2026-02-09 02:23:27.889075 | Job console starting
2026-02-09 02:23:27.900016 | Updating git repos
2026-02-09 02:23:27.982714 | Cloning repos into workspace
2026-02-09 02:23:28.272028 | Restoring repo states
2026-02-09 02:23:28.289519 | Merging changes
2026-02-09 02:23:28.289543 | Checking out repos
2026-02-09 02:23:28.571156 | Preparing playbooks
2026-02-09 02:23:29.205384 | Running Ansible setup
2026-02-09 02:23:33.577232 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-09 02:23:34.315090 |
2026-02-09 02:23:34.315289 | PLAY [Base pre]
2026-02-09 02:23:34.332015 |
2026-02-09 02:23:34.332148 | TASK [Setup log path fact]
2026-02-09 02:23:34.362582 | orchestrator | ok
2026-02-09 02:23:34.380111 |
2026-02-09 02:23:34.380265 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-09 02:23:34.426405 | orchestrator | ok
2026-02-09 02:23:34.441415 |
2026-02-09 02:23:34.441540 | TASK [emit-job-header : Print job information]
2026-02-09 02:23:34.489641 | # Job Information
2026-02-09 02:23:34.489897 | Ansible Version: 2.16.14
2026-02-09 02:23:34.489958 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-09 02:23:34.490015 | Pipeline: periodic-midnight
2026-02-09 02:23:34.490056 | Executor: 521e9411259a
2026-02-09 02:23:34.490092 | Triggered by: https://github.com/osism/testbed
2026-02-09 02:23:34.490129 | Event ID: 26ee74236e0649308c556c0adcc1fea7
2026-02-09 02:23:34.501361 |
2026-02-09 02:23:34.501499 | LOOP [emit-job-header : Print node information]
2026-02-09 02:23:34.629941 | orchestrator | ok:
2026-02-09 02:23:34.630223 | orchestrator | # Node Information
2026-02-09 02:23:34.630327 | orchestrator | Inventory Hostname: orchestrator
2026-02-09 02:23:34.630371 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-09 02:23:34.630406 | orchestrator | Username: zuul-testbed03
2026-02-09 02:23:34.630441 | orchestrator | Distro: Debian 12.13
2026-02-09 02:23:34.630479 | orchestrator | Provider: static-testbed
2026-02-09 02:23:34.630513 | orchestrator | Region:
2026-02-09 02:23:34.630547 | orchestrator | Label: testbed-orchestrator
2026-02-09 02:23:34.630580 | orchestrator | Product Name: OpenStack Nova
2026-02-09 02:23:34.630612 | orchestrator | Interface IP: 81.163.193.140
2026-02-09 02:23:34.654705 |
2026-02-09 02:23:34.654870 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-09 02:23:35.136766 | orchestrator -> localhost | changed
2026-02-09 02:23:35.149860 |
2026-02-09 02:23:35.150015 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-09 02:23:36.219607 | orchestrator -> localhost | changed
2026-02-09 02:23:36.242204 |
2026-02-09 02:23:36.242363 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-09 02:23:36.539901 | orchestrator -> localhost | ok
2026-02-09 02:23:36.548336 |
2026-02-09 02:23:36.548463 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-09 02:23:36.580556 | orchestrator | ok
2026-02-09 02:23:36.598579 | orchestrator | included: /var/lib/zuul/builds/498f0a2532124dcf97529f6199660ac9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-09 02:23:36.606630 |
2026-02-09 02:23:36.606732 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-09 02:23:38.037465 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-09 02:23:38.037981 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/498f0a2532124dcf97529f6199660ac9/work/498f0a2532124dcf97529f6199660ac9_id_rsa
2026-02-09 02:23:38.038098 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/498f0a2532124dcf97529f6199660ac9/work/498f0a2532124dcf97529f6199660ac9_id_rsa.pub
2026-02-09 02:23:38.038178 | orchestrator -> localhost | The key fingerprint is:
2026-02-09 02:23:38.038272 | orchestrator -> localhost | SHA256:428QBCs1A2ywag+z9AsscJh8FNJTlSulPUwbz5yASPs zuul-build-sshkey
2026-02-09 02:23:38.038342 | orchestrator -> localhost | The key's randomart image is:
2026-02-09 02:23:38.038429 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-09 02:23:38.038492 | orchestrator -> localhost | | .+=+oBo. |
2026-02-09 02:23:38.038555 | orchestrator -> localhost | | .==o X. |
2026-02-09 02:23:38.038614 | orchestrator -> localhost | | .+o B.O . |
2026-02-09 02:23:38.038672 | orchestrator -> localhost | |.+. .+ *.= |
2026-02-09 02:23:38.038729 | orchestrator -> localhost | |=*.. E. S. |
2026-02-09 02:23:38.038809 | orchestrator -> localhost | |=.B ... |
2026-02-09 02:23:38.038910 | orchestrator -> localhost | |.+ o .. |
2026-02-09 02:23:38.038973 | orchestrator -> localhost | |. . . .. |
2026-02-09 02:23:38.039035 | orchestrator -> localhost | | . .. |
2026-02-09 02:23:38.039094 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-09 02:23:38.039235 | orchestrator -> localhost | ok: Runtime: 0:00:00.943293
2026-02-09 02:23:38.053918 |
2026-02-09 02:23:38.054083 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-09 02:23:38.089471 | orchestrator | ok
2026-02-09 02:23:38.102262 | orchestrator | included: /var/lib/zuul/builds/498f0a2532124dcf97529f6199660ac9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-09 02:23:38.111571 |
2026-02-09 02:23:38.111669 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-09 02:23:38.135425 | orchestrator | skipping: Conditional result was False
2026-02-09 02:23:38.150041 |
2026-02-09 02:23:38.150187 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-09 02:23:38.796925 | orchestrator | changed
2026-02-09 02:23:38.806452 |
2026-02-09 02:23:38.806575 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-09 02:23:39.166381 | orchestrator | ok
2026-02-09 02:23:39.175299 |
2026-02-09 02:23:39.175442 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-09 02:23:39.592998 | orchestrator | ok
2026-02-09 02:23:39.599287 |
2026-02-09 02:23:39.599398 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-09 02:23:40.013511 | orchestrator | ok
2026-02-09 02:23:40.021965 |
2026-02-09 02:23:40.022092 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-09 02:23:40.046210 | orchestrator | skipping: Conditional result was False
2026-02-09 02:23:40.056591 |
2026-02-09 02:23:40.056739 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-09 02:23:40.527499 | orchestrator -> localhost | changed
2026-02-09 02:23:40.541568 |
2026-02-09 02:23:40.541680 | TASK [add-build-sshkey : Add back temp key]
2026-02-09 02:23:40.889903 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/498f0a2532124dcf97529f6199660ac9/work/498f0a2532124dcf97529f6199660ac9_id_rsa (zuul-build-sshkey)
2026-02-09 02:23:40.890467 | orchestrator -> localhost | ok: Runtime: 0:00:00.017426
2026-02-09 02:23:40.906560 |
2026-02-09 02:23:40.906718 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-09 02:23:41.380050 | orchestrator | ok
2026-02-09 02:23:41.389369 |
2026-02-09 02:23:41.389513 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-09 02:23:41.424735 | orchestrator | skipping: Conditional result was False
2026-02-09 02:23:41.483745 |
2026-02-09 02:23:41.483878 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-09 02:23:41.950709 | orchestrator | ok
2026-02-09 02:23:41.968008 |
2026-02-09 02:23:41.968139 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-09 02:23:42.014246 | orchestrator | ok
2026-02-09 02:23:42.024760 |
2026-02-09 02:23:42.024892 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-09 02:23:42.332328 | orchestrator -> localhost | ok
2026-02-09 02:23:42.340614 |
2026-02-09 02:23:42.340735 | TASK [validate-host : Collect information about the host]
2026-02-09 02:23:43.601031 | orchestrator | ok
2026-02-09 02:23:43.619099 |
2026-02-09 02:23:43.619228 | TASK [validate-host : Sanitize hostname]
2026-02-09 02:23:43.685411 | orchestrator | ok
2026-02-09 02:23:43.693595 |
2026-02-09 02:23:43.693730 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-09 02:23:44.265583 | orchestrator -> localhost | changed
2026-02-09 02:23:44.278273 |
2026-02-09 02:23:44.278420 | TASK [validate-host : Collect information about zuul worker]
2026-02-09 02:23:44.803404 | orchestrator | ok
2026-02-09 02:23:44.812538 |
2026-02-09 02:23:44.812691 | TASK [validate-host : Write out all zuul information for each host]
2026-02-09 02:23:45.375822 | orchestrator -> localhost | changed
2026-02-09 02:23:45.395090 |
2026-02-09 02:23:45.395242 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-09 02:23:45.725379 | orchestrator | ok
2026-02-09 02:23:45.734341 |
2026-02-09 02:23:45.734470 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-09 02:24:08.259587 | orchestrator | changed:
2026-02-09 02:24:08.259823 | orchestrator | .d..t...... src/
2026-02-09 02:24:08.259859 | orchestrator | .d..t...... src/github.com/
2026-02-09 02:24:08.259884 | orchestrator | .d..t...... src/github.com/osism/
2026-02-09 02:24:08.259906 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-09 02:24:08.259927 | orchestrator | RedHat.yml
2026-02-09 02:24:08.274374 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-09 02:24:08.274391 | orchestrator | RedHat.yml
2026-02-09 02:24:08.274444 | orchestrator | = 2.2.0"...
2026-02-09 02:24:18.695830 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-09 02:24:18.716293 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-02-09 02:24:19.199694 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-09 02:24:19.824539 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-09 02:24:19.896789 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-09 02:24:20.817326 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-09 02:24:21.192372 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-09 02:24:22.095413 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-09 02:24:22.095480 | orchestrator |
2026-02-09 02:24:22.095489 | orchestrator | Providers are signed by their developers.
2026-02-09 02:24:22.095495 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-09 02:24:22.095502 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-09 02:24:22.095513 | orchestrator |
2026-02-09 02:24:22.095519 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-09 02:24:22.095537 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-09 02:24:22.095542 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-09 02:24:22.095548 | orchestrator | you run "tofu init" in the future.
2026-02-09 02:24:22.095849 | orchestrator |
2026-02-09 02:24:22.095921 | orchestrator | OpenTofu has been successfully initialized!
2026-02-09 02:24:22.095928 | orchestrator |
2026-02-09 02:24:22.095933 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-09 02:24:22.095938 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-09 02:24:22.095942 | orchestrator | should now work.
2026-02-09 02:24:22.095946 | orchestrator |
2026-02-09 02:24:22.095950 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-09 02:24:22.095955 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-09 02:24:22.095960 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-09 02:24:22.283998 | orchestrator | Created and switched to workspace "ci"!
2026-02-09 02:24:22.284073 | orchestrator |
2026-02-09 02:24:22.284083 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-09 02:24:22.284090 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-09 02:24:22.284097 | orchestrator | for this configuration.
2026-02-09 02:24:22.436400 | orchestrator | ci.auto.tfvars
2026-02-09 02:24:22.440772 | orchestrator | default_custom.tf
2026-02-09 02:24:23.440189 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-09 02:24:23.985989 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-09 02:24:24.270670 | orchestrator |
2026-02-09 02:24:24.270749 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-09 02:24:24.270757 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-09 02:24:24.270763 | orchestrator | + create
2026-02-09 02:24:24.270768 | orchestrator | <= read (data resources)
2026-02-09 02:24:24.270781 | orchestrator |
2026-02-09 02:24:24.270785 | orchestrator | OpenTofu will perform the following actions:
2026-02-09 02:24:24.270906 | orchestrator |
2026-02-09 02:24:24.270912 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-09 02:24:24.270917 | orchestrator | # (config refers to values not yet known)
2026-02-09 02:24:24.270922 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-09 02:24:24.270926 | orchestrator | + checksum = (known after apply)
2026-02-09 02:24:24.270930 | orchestrator | + created_at = (known after apply)
2026-02-09 02:24:24.270934 | orchestrator | + file = (known after apply)
2026-02-09 02:24:24.270938 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.270961 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.270965 | orchestrator | + min_disk_gb = (known after apply)
2026-02-09 02:24:24.270969 | orchestrator | + min_ram_mb = (known after apply)
2026-02-09 02:24:24.270973 | orchestrator | + most_recent = true
2026-02-09 02:24:24.270977 | orchestrator | + name = (known after apply)
2026-02-09 02:24:24.270981 | orchestrator | + protected = (known after apply)
2026-02-09 02:24:24.270984 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.270991 | orchestrator | + schema = (known after apply)
2026-02-09 02:24:24.270995 | orchestrator | + size_bytes = (known after apply)
2026-02-09 02:24:24.270999 | orchestrator | + tags = (known after apply)
2026-02-09 02:24:24.271003 | orchestrator | + updated_at = (known after apply)
2026-02-09 02:24:24.271007 | orchestrator | }
2026-02-09 02:24:24.271035 | orchestrator |
2026-02-09 02:24:24.271040 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-09 02:24:24.271044 | orchestrator | # (config refers to values not yet known)
2026-02-09 02:24:24.271048 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-09 02:24:24.271052 | orchestrator | + checksum = (known after apply)
2026-02-09 02:24:24.271056 | orchestrator | + created_at = (known after apply)
2026-02-09 02:24:24.271060 | orchestrator | + file = (known after apply)
2026-02-09 02:24:24.271064 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271067 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.271071 | orchestrator | + min_disk_gb = (known after apply)
2026-02-09 02:24:24.271075 | orchestrator | + min_ram_mb = (known after apply)
2026-02-09 02:24:24.271079 | orchestrator | + most_recent = true
2026-02-09 02:24:24.271082 | orchestrator | + name = (known after apply)
2026-02-09 02:24:24.271086 | orchestrator | + protected = (known after apply)
2026-02-09 02:24:24.271090 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.271094 | orchestrator | + schema = (known after apply)
2026-02-09 02:24:24.271098 | orchestrator | + size_bytes = (known after apply)
2026-02-09 02:24:24.271101 | orchestrator | + tags = (known after apply)
2026-02-09 02:24:24.271105 | orchestrator | + updated_at = (known after apply)
2026-02-09 02:24:24.271109 | orchestrator | }
2026-02-09 02:24:24.271137 | orchestrator |
2026-02-09 02:24:24.271142 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-09 02:24:24.271146 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-09 02:24:24.271164 | orchestrator | + content = (known after apply)
2026-02-09 02:24:24.271168 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-09 02:24:24.271172 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-09 02:24:24.271176 | orchestrator | + content_md5 = (known after apply)
2026-02-09 02:24:24.271180 | orchestrator | + content_sha1 = (known after apply)
2026-02-09 02:24:24.271183 | orchestrator | + content_sha256 = (known after apply)
2026-02-09 02:24:24.271187 | orchestrator | + content_sha512 = (known after apply)
2026-02-09 02:24:24.271191 | orchestrator | + directory_permission = "0777"
2026-02-09 02:24:24.271195 | orchestrator | + file_permission = "0644"
2026-02-09 02:24:24.271199 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-09 02:24:24.271202 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271206 | orchestrator | }
2026-02-09 02:24:24.271264 | orchestrator |
2026-02-09 02:24:24.271273 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-09 02:24:24.271277 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-09 02:24:24.271281 | orchestrator | + content = (known after apply)
2026-02-09 02:24:24.271285 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-09 02:24:24.271288 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-09 02:24:24.271292 | orchestrator | + content_md5 = (known after apply)
2026-02-09 02:24:24.271296 | orchestrator | + content_sha1 = (known after apply)
2026-02-09 02:24:24.271299 | orchestrator | + content_sha256 = (known after apply)
2026-02-09 02:24:24.271312 | orchestrator | + content_sha512 = (known after apply)
2026-02-09 02:24:24.271316 | orchestrator | + directory_permission = "0777"
2026-02-09 02:24:24.271319 | orchestrator | + file_permission = "0644"
2026-02-09 02:24:24.271328 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-09 02:24:24.271332 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271335 | orchestrator | }
2026-02-09 02:24:24.271374 | orchestrator |
2026-02-09 02:24:24.271379 | orchestrator | # local_file.inventory will be created
2026-02-09 02:24:24.271386 | orchestrator | + resource "local_file" "inventory" {
2026-02-09 02:24:24.271390 | orchestrator | + content = (known after apply)
2026-02-09 02:24:24.271394 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-09 02:24:24.271398 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-09 02:24:24.271401 | orchestrator | + content_md5 = (known after apply)
2026-02-09 02:24:24.271405 | orchestrator | + content_sha1 = (known after apply)
2026-02-09 02:24:24.271409 | orchestrator | + content_sha256 = (known after apply)
2026-02-09 02:24:24.271413 | orchestrator | + content_sha512 = (known after apply)
2026-02-09 02:24:24.271417 | orchestrator | + directory_permission = "0777"
2026-02-09 02:24:24.271421 | orchestrator | + file_permission = "0644"
2026-02-09 02:24:24.271424 | orchestrator | + filename = "inventory.ci"
2026-02-09 02:24:24.271428 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271432 | orchestrator | }
2026-02-09 02:24:24.271482 | orchestrator |
2026-02-09 02:24:24.271491 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-09 02:24:24.271495 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-09 02:24:24.271499 | orchestrator | + content = (sensitive value)
2026-02-09 02:24:24.271502 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-09 02:24:24.271506 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-09 02:24:24.271510 | orchestrator | + content_md5 = (known after apply)
2026-02-09 02:24:24.271514 | orchestrator | + content_sha1 = (known after apply)
2026-02-09 02:24:24.271517 | orchestrator | + content_sha256 = (known after apply)
2026-02-09 02:24:24.271521 | orchestrator | + content_sha512 = (known after apply)
2026-02-09 02:24:24.271525 | orchestrator | + directory_permission = "0700"
2026-02-09 02:24:24.271571 | orchestrator | + file_permission = "0600"
2026-02-09 02:24:24.271576 | orchestrator | + filename = ".id_rsa.ci"
2026-02-09 02:24:24.271579 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271584 | orchestrator | }
2026-02-09 02:24:24.271590 | orchestrator |
2026-02-09 02:24:24.271594 | orchestrator | # null_resource.node_semaphore will be created
2026-02-09 02:24:24.271598 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-09 02:24:24.271602 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271606 | orchestrator | }
2026-02-09 02:24:24.271609 | orchestrator |
2026-02-09 02:24:24.271613 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-09 02:24:24.271617 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-09 02:24:24.271621 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.271624 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.271628 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271632 | orchestrator | + image_id = (known after apply)
2026-02-09 02:24:24.271636 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.271640 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-09 02:24:24.271643 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.271647 | orchestrator | + size = 80
2026-02-09 02:24:24.271651 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.271654 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.271658 | orchestrator | }
2026-02-09 02:24:24.271706 | orchestrator |
2026-02-09 02:24:24.271715 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-09 02:24:24.271719 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-09 02:24:24.271723 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.271727 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.271730 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271739 | orchestrator | + image_id = (known after apply)
2026-02-09 02:24:24.271743 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.271746 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-09 02:24:24.271750 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.271754 | orchestrator | + size = 80
2026-02-09 02:24:24.271758 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.271762 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.271765 | orchestrator | }
2026-02-09 02:24:24.271805 | orchestrator |
2026-02-09 02:24:24.271814 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-09 02:24:24.271818 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-09 02:24:24.271822 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.271825 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.271829 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271833 | orchestrator | + image_id = (known after apply)
2026-02-09 02:24:24.271837 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.271841 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-09 02:24:24.271844 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.271848 | orchestrator | + size = 80
2026-02-09 02:24:24.271852 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.271856 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.271859 | orchestrator | }
2026-02-09 02:24:24.271865 | orchestrator |
2026-02-09 02:24:24.271869 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-09 02:24:24.271873 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-09 02:24:24.271876 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.271880 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.271884 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.271888 | orchestrator | + image_id = (known after apply)
2026-02-09 02:24:24.271891 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.271895 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-09 02:24:24.271899 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.271902 | orchestrator | + size = 80
2026-02-09 02:24:24.271910 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.271914 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.271918 | orchestrator | }
2026-02-09 02:24:24.272061 | orchestrator |
2026-02-09 02:24:24.272067 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-09 02:24:24.272071 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-09 02:24:24.272075 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272078 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272082 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272086 | orchestrator | + image_id = (known after apply)
2026-02-09 02:24:24.272089 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272093 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-09 02:24:24.272097 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272101 | orchestrator | + size = 80
2026-02-09 02:24:24.272104 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272108 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272112 | orchestrator | }
2026-02-09 02:24:24.272143 | orchestrator |
2026-02-09 02:24:24.272148 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-09 02:24:24.272183 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-09 02:24:24.272187 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272191 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272194 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272202 | orchestrator | + image_id = (known after apply)
2026-02-09 02:24:24.272206 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272210 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-09 02:24:24.272214 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272218 | orchestrator | + size = 80
2026-02-09 02:24:24.272222 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272225 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272229 | orchestrator | }
2026-02-09 02:24:24.272235 | orchestrator |
2026-02-09 02:24:24.272239 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-09 02:24:24.272242 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-09 02:24:24.272246 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272250 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272254 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272258 | orchestrator | + image_id = (known after apply)
2026-02-09 02:24:24.272261 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272265 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-09 02:24:24.272269 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272273 | orchestrator | + size = 80
2026-02-09 02:24:24.272276 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272280 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272284 | orchestrator | }
2026-02-09 02:24:24.272289 | orchestrator |
2026-02-09 02:24:24.272293 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-09 02:24:24.272297 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-09 02:24:24.272301 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272305 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272308 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272312 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272316 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-09 02:24:24.272320 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272323 | orchestrator | + size = 20
2026-02-09 02:24:24.272327 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272331 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272335 | orchestrator | }
2026-02-09 02:24:24.272340 | orchestrator |
2026-02-09 02:24:24.272344 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-09 02:24:24.272348 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-09 02:24:24.272352 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272356 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272359 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272363 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272367 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-09 02:24:24.272371 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272374 | orchestrator | + size = 20
2026-02-09 02:24:24.272378 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272382 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272385 | orchestrator | }
2026-02-09 02:24:24.272390 | orchestrator |
2026-02-09 02:24:24.272394 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-09 02:24:24.272398 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-09 02:24:24.272402 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272406 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272409 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272413 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272417 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-09 02:24:24.272421 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272436 | orchestrator | + size = 20
2026-02-09 02:24:24.272440 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272443 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272447 | orchestrator | }
2026-02-09 02:24:24.272453 | orchestrator |
2026-02-09 02:24:24.272457 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-09 02:24:24.272460 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-09 02:24:24.272464 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272468 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272472 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272479 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272483 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-09 02:24:24.272487 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272490 | orchestrator | + size = 20
2026-02-09 02:24:24.272494 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272498 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272502 | orchestrator | }
2026-02-09 02:24:24.272507 | orchestrator |
2026-02-09 02:24:24.272511 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-09 02:24:24.272515 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-09 02:24:24.272519 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272522 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272527 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272530 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272534 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-09 02:24:24.272538 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272542 | orchestrator | + size = 20
2026-02-09 02:24:24.272546 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272549 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272553 | orchestrator | }
2026-02-09 02:24:24.272558 | orchestrator |
2026-02-09 02:24:24.272562 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-09 02:24:24.272566 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-09 02:24:24.272570 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272574 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272577 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272581 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272585 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-09 02:24:24.272589 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272592 | orchestrator | + size = 20
2026-02-09 02:24:24.272596 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272600 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272604 | orchestrator | }
2026-02-09 02:24:24.272609 | orchestrator |
2026-02-09 02:24:24.272613 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-09 02:24:24.272617 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-09 02:24:24.272620 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272624 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272628 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272632 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272635 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-09 02:24:24.272639 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272643 | orchestrator | + size = 20
2026-02-09 02:24:24.272647 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272650 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272654 | orchestrator | }
2026-02-09 02:24:24.272659 | orchestrator |
2026-02-09 02:24:24.272663 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-09 02:24:24.272667 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-09 02:24:24.272674 | orchestrator | + attachment = (known after apply)
2026-02-09 02:24:24.272678 | orchestrator | + availability_zone = "nova"
2026-02-09 02:24:24.272682 | orchestrator | + id = (known after apply)
2026-02-09 02:24:24.272686 | orchestrator | + metadata = (known after apply)
2026-02-09 02:24:24.272689 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-09 02:24:24.272693 | orchestrator | + region = (known after apply)
2026-02-09 02:24:24.272697 | orchestrator | + size = 20
2026-02-09 02:24:24.272700 | orchestrator | + volume_retype_policy = "never"
2026-02-09 02:24:24.272704 | orchestrator | + volume_type = "ssd"
2026-02-09 02:24:24.272708 | orchestrator | }
2026-02-09 02:24:24.272713 | orchestrator |
2026-02-09 02:24:24.272717 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-09 02:24:24.272721 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-09 02:24:24.272725 | orchestrator | + attachment = (known after apply) 2026-02-09 02:24:24.272728 | orchestrator | + availability_zone = "nova" 2026-02-09 02:24:24.272732 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.272736 | orchestrator | + metadata = (known after apply) 2026-02-09 02:24:24.272740 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-09 02:24:24.272743 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.272747 | orchestrator | + size = 20 2026-02-09 02:24:24.272751 | orchestrator | + volume_retype_policy = "never" 2026-02-09 02:24:24.272754 | orchestrator | + volume_type = "ssd" 2026-02-09 02:24:24.272758 | orchestrator | } 2026-02-09 02:24:24.272882 | orchestrator | 2026-02-09 02:24:24.272890 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-09 02:24:24.272894 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-09 02:24:24.272897 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-09 02:24:24.272901 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-09 02:24:24.272905 | orchestrator | + all_metadata = (known after apply) 2026-02-09 02:24:24.272908 | orchestrator | + all_tags = (known after apply) 2026-02-09 02:24:24.272912 | orchestrator | + availability_zone = "nova" 2026-02-09 02:24:24.272916 | orchestrator | + config_drive = true 2026-02-09 02:24:24.272923 | orchestrator | + created = (known after apply) 2026-02-09 02:24:24.272927 | orchestrator | + flavor_id = (known after apply) 2026-02-09 02:24:24.272930 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-09 02:24:24.272934 | orchestrator | + force_delete = false 2026-02-09 02:24:24.272938 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-09 02:24:24.272942 | 
orchestrator | + id = (known after apply) 2026-02-09 02:24:24.272945 | orchestrator | + image_id = (known after apply) 2026-02-09 02:24:24.272949 | orchestrator | + image_name = (known after apply) 2026-02-09 02:24:24.272953 | orchestrator | + key_pair = "testbed" 2026-02-09 02:24:24.272956 | orchestrator | + name = "testbed-manager" 2026-02-09 02:24:24.272960 | orchestrator | + power_state = "active" 2026-02-09 02:24:24.272964 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.272968 | orchestrator | + security_groups = (known after apply) 2026-02-09 02:24:24.272971 | orchestrator | + stop_before_destroy = false 2026-02-09 02:24:24.272975 | orchestrator | + updated = (known after apply) 2026-02-09 02:24:24.272979 | orchestrator | + user_data = (sensitive value) 2026-02-09 02:24:24.272982 | orchestrator | 2026-02-09 02:24:24.272986 | orchestrator | + block_device { 2026-02-09 02:24:24.272990 | orchestrator | + boot_index = 0 2026-02-09 02:24:24.272994 | orchestrator | + delete_on_termination = false 2026-02-09 02:24:24.272998 | orchestrator | + destination_type = "volume" 2026-02-09 02:24:24.273001 | orchestrator | + multiattach = false 2026-02-09 02:24:24.273005 | orchestrator | + source_type = "volume" 2026-02-09 02:24:24.273009 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.273016 | orchestrator | } 2026-02-09 02:24:24.273020 | orchestrator | 2026-02-09 02:24:24.273024 | orchestrator | + network { 2026-02-09 02:24:24.273028 | orchestrator | + access_network = false 2026-02-09 02:24:24.273032 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-09 02:24:24.273036 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-09 02:24:24.273039 | orchestrator | + mac = (known after apply) 2026-02-09 02:24:24.273043 | orchestrator | + name = (known after apply) 2026-02-09 02:24:24.273047 | orchestrator | + port = (known after apply) 2026-02-09 02:24:24.273051 | orchestrator | + uuid = (known after apply) 2026-02-09 
02:24:24.273054 | orchestrator | } 2026-02-09 02:24:24.273058 | orchestrator | } 2026-02-09 02:24:24.273238 | orchestrator | 2026-02-09 02:24:24.273244 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-09 02:24:24.273248 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-09 02:24:24.273252 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-09 02:24:24.273256 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-09 02:24:24.273260 | orchestrator | + all_metadata = (known after apply) 2026-02-09 02:24:24.273263 | orchestrator | + all_tags = (known after apply) 2026-02-09 02:24:24.273267 | orchestrator | + availability_zone = "nova" 2026-02-09 02:24:24.273271 | orchestrator | + config_drive = true 2026-02-09 02:24:24.273275 | orchestrator | + created = (known after apply) 2026-02-09 02:24:24.273278 | orchestrator | + flavor_id = (known after apply) 2026-02-09 02:24:24.273282 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-09 02:24:24.273286 | orchestrator | + force_delete = false 2026-02-09 02:24:24.273290 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-09 02:24:24.273294 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.273297 | orchestrator | + image_id = (known after apply) 2026-02-09 02:24:24.273301 | orchestrator | + image_name = (known after apply) 2026-02-09 02:24:24.273305 | orchestrator | + key_pair = "testbed" 2026-02-09 02:24:24.273309 | orchestrator | + name = "testbed-node-0" 2026-02-09 02:24:24.273312 | orchestrator | + power_state = "active" 2026-02-09 02:24:24.273316 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.273320 | orchestrator | + security_groups = (known after apply) 2026-02-09 02:24:24.273324 | orchestrator | + stop_before_destroy = false 2026-02-09 02:24:24.273327 | orchestrator | + updated = (known after apply) 2026-02-09 02:24:24.273331 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-09 02:24:24.273335 | orchestrator | 2026-02-09 02:24:24.273339 | orchestrator | + block_device { 2026-02-09 02:24:24.273343 | orchestrator | + boot_index = 0 2026-02-09 02:24:24.273347 | orchestrator | + delete_on_termination = false 2026-02-09 02:24:24.273350 | orchestrator | + destination_type = "volume" 2026-02-09 02:24:24.273354 | orchestrator | + multiattach = false 2026-02-09 02:24:24.273358 | orchestrator | + source_type = "volume" 2026-02-09 02:24:24.273362 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.273365 | orchestrator | } 2026-02-09 02:24:24.273369 | orchestrator | 2026-02-09 02:24:24.273373 | orchestrator | + network { 2026-02-09 02:24:24.273377 | orchestrator | + access_network = false 2026-02-09 02:24:24.273381 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-09 02:24:24.273384 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-09 02:24:24.273388 | orchestrator | + mac = (known after apply) 2026-02-09 02:24:24.273392 | orchestrator | + name = (known after apply) 2026-02-09 02:24:24.273396 | orchestrator | + port = (known after apply) 2026-02-09 02:24:24.273400 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.273404 | orchestrator | } 2026-02-09 02:24:24.273408 | orchestrator | } 2026-02-09 02:24:24.273570 | orchestrator | 2026-02-09 02:24:24.273577 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-09 02:24:24.273581 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-09 02:24:24.273585 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-09 02:24:24.273593 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-09 02:24:24.273597 | orchestrator | + all_metadata = (known after apply) 2026-02-09 02:24:24.273600 | orchestrator | + all_tags = (known after apply) 2026-02-09 02:24:24.273604 | orchestrator | + availability_zone = "nova" 2026-02-09 02:24:24.273608 
| orchestrator | + config_drive = true 2026-02-09 02:24:24.273611 | orchestrator | + created = (known after apply) 2026-02-09 02:24:24.273615 | orchestrator | + flavor_id = (known after apply) 2026-02-09 02:24:24.273619 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-09 02:24:24.273623 | orchestrator | + force_delete = false 2026-02-09 02:24:24.273626 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-09 02:24:24.273630 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.273634 | orchestrator | + image_id = (known after apply) 2026-02-09 02:24:24.273638 | orchestrator | + image_name = (known after apply) 2026-02-09 02:24:24.273641 | orchestrator | + key_pair = "testbed" 2026-02-09 02:24:24.273645 | orchestrator | + name = "testbed-node-1" 2026-02-09 02:24:24.273649 | orchestrator | + power_state = "active" 2026-02-09 02:24:24.273653 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.273657 | orchestrator | + security_groups = (known after apply) 2026-02-09 02:24:24.273660 | orchestrator | + stop_before_destroy = false 2026-02-09 02:24:24.273664 | orchestrator | + updated = (known after apply) 2026-02-09 02:24:24.273671 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-09 02:24:24.273675 | orchestrator | 2026-02-09 02:24:24.273679 | orchestrator | + block_device { 2026-02-09 02:24:24.273683 | orchestrator | + boot_index = 0 2026-02-09 02:24:24.273687 | orchestrator | + delete_on_termination = false 2026-02-09 02:24:24.273690 | orchestrator | + destination_type = "volume" 2026-02-09 02:24:24.273694 | orchestrator | + multiattach = false 2026-02-09 02:24:24.273698 | orchestrator | + source_type = "volume" 2026-02-09 02:24:24.273702 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.273705 | orchestrator | } 2026-02-09 02:24:24.273709 | orchestrator | 2026-02-09 02:24:24.273713 | orchestrator | + network { 2026-02-09 02:24:24.273716 | orchestrator | + access_network = 
false 2026-02-09 02:24:24.273720 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-09 02:24:24.273724 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-09 02:24:24.273728 | orchestrator | + mac = (known after apply) 2026-02-09 02:24:24.273732 | orchestrator | + name = (known after apply) 2026-02-09 02:24:24.273735 | orchestrator | + port = (known after apply) 2026-02-09 02:24:24.273739 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.273743 | orchestrator | } 2026-02-09 02:24:24.273747 | orchestrator | } 2026-02-09 02:24:24.273866 | orchestrator | 2026-02-09 02:24:24.273872 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-09 02:24:24.273876 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-09 02:24:24.273880 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-09 02:24:24.273884 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-09 02:24:24.273888 | orchestrator | + all_metadata = (known after apply) 2026-02-09 02:24:24.273892 | orchestrator | + all_tags = (known after apply) 2026-02-09 02:24:24.273896 | orchestrator | + availability_zone = "nova" 2026-02-09 02:24:24.273900 | orchestrator | + config_drive = true 2026-02-09 02:24:24.273903 | orchestrator | + created = (known after apply) 2026-02-09 02:24:24.273907 | orchestrator | + flavor_id = (known after apply) 2026-02-09 02:24:24.273911 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-09 02:24:24.273914 | orchestrator | + force_delete = false 2026-02-09 02:24:24.273918 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-09 02:24:24.273922 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.273926 | orchestrator | + image_id = (known after apply) 2026-02-09 02:24:24.273933 | orchestrator | + image_name = (known after apply) 2026-02-09 02:24:24.273937 | orchestrator | + key_pair = "testbed" 2026-02-09 02:24:24.273941 | orchestrator | + name = 
"testbed-node-2" 2026-02-09 02:24:24.273945 | orchestrator | + power_state = "active" 2026-02-09 02:24:24.273948 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.273952 | orchestrator | + security_groups = (known after apply) 2026-02-09 02:24:24.273956 | orchestrator | + stop_before_destroy = false 2026-02-09 02:24:24.273960 | orchestrator | + updated = (known after apply) 2026-02-09 02:24:24.273963 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-09 02:24:24.273967 | orchestrator | 2026-02-09 02:24:24.273971 | orchestrator | + block_device { 2026-02-09 02:24:24.273974 | orchestrator | + boot_index = 0 2026-02-09 02:24:24.273978 | orchestrator | + delete_on_termination = false 2026-02-09 02:24:24.273982 | orchestrator | + destination_type = "volume" 2026-02-09 02:24:24.273986 | orchestrator | + multiattach = false 2026-02-09 02:24:24.273989 | orchestrator | + source_type = "volume" 2026-02-09 02:24:24.273993 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.273997 | orchestrator | } 2026-02-09 02:24:24.274000 | orchestrator | 2026-02-09 02:24:24.274004 | orchestrator | + network { 2026-02-09 02:24:24.274008 | orchestrator | + access_network = false 2026-02-09 02:24:24.274012 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-09 02:24:24.274042 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-09 02:24:24.274046 | orchestrator | + mac = (known after apply) 2026-02-09 02:24:24.274050 | orchestrator | + name = (known after apply) 2026-02-09 02:24:24.274053 | orchestrator | + port = (known after apply) 2026-02-09 02:24:24.274057 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.274061 | orchestrator | } 2026-02-09 02:24:24.274065 | orchestrator | } 2026-02-09 02:24:24.274408 | orchestrator | 2026-02-09 02:24:24.274421 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-09 02:24:24.274427 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-09 02:24:24.274431 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-09 02:24:24.274435 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-09 02:24:24.274439 | orchestrator | + all_metadata = (known after apply) 2026-02-09 02:24:24.274443 | orchestrator | + all_tags = (known after apply) 2026-02-09 02:24:24.274446 | orchestrator | + availability_zone = "nova" 2026-02-09 02:24:24.274450 | orchestrator | + config_drive = true 2026-02-09 02:24:24.274454 | orchestrator | + created = (known after apply) 2026-02-09 02:24:24.274457 | orchestrator | + flavor_id = (known after apply) 2026-02-09 02:24:24.274461 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-09 02:24:24.274465 | orchestrator | + force_delete = false 2026-02-09 02:24:24.274469 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-09 02:24:24.274472 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.274476 | orchestrator | + image_id = (known after apply) 2026-02-09 02:24:24.274480 | orchestrator | + image_name = (known after apply) 2026-02-09 02:24:24.274483 | orchestrator | + key_pair = "testbed" 2026-02-09 02:24:24.274487 | orchestrator | + name = "testbed-node-3" 2026-02-09 02:24:24.274491 | orchestrator | + power_state = "active" 2026-02-09 02:24:24.274495 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.274498 | orchestrator | + security_groups = (known after apply) 2026-02-09 02:24:24.274502 | orchestrator | + stop_before_destroy = false 2026-02-09 02:24:24.274506 | orchestrator | + updated = (known after apply) 2026-02-09 02:24:24.274509 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-09 02:24:24.274513 | orchestrator | 2026-02-09 02:24:24.274517 | orchestrator | + block_device { 2026-02-09 02:24:24.274521 | orchestrator | + boot_index = 0 2026-02-09 02:24:24.274524 | orchestrator | + delete_on_termination = false 2026-02-09 
02:24:24.274528 | orchestrator | + destination_type = "volume" 2026-02-09 02:24:24.274535 | orchestrator | + multiattach = false 2026-02-09 02:24:24.274539 | orchestrator | + source_type = "volume" 2026-02-09 02:24:24.274543 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.274547 | orchestrator | } 2026-02-09 02:24:24.274550 | orchestrator | 2026-02-09 02:24:24.274554 | orchestrator | + network { 2026-02-09 02:24:24.274558 | orchestrator | + access_network = false 2026-02-09 02:24:24.274562 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-09 02:24:24.274565 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-09 02:24:24.274569 | orchestrator | + mac = (known after apply) 2026-02-09 02:24:24.274573 | orchestrator | + name = (known after apply) 2026-02-09 02:24:24.274576 | orchestrator | + port = (known after apply) 2026-02-09 02:24:24.274580 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.274584 | orchestrator | } 2026-02-09 02:24:24.274588 | orchestrator | } 2026-02-09 02:24:24.274745 | orchestrator | 2026-02-09 02:24:24.274754 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-09 02:24:24.274758 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-09 02:24:24.274762 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-09 02:24:24.274766 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-09 02:24:24.274770 | orchestrator | + all_metadata = (known after apply) 2026-02-09 02:24:24.274773 | orchestrator | + all_tags = (known after apply) 2026-02-09 02:24:24.274777 | orchestrator | + availability_zone = "nova" 2026-02-09 02:24:24.274781 | orchestrator | + config_drive = true 2026-02-09 02:24:24.274785 | orchestrator | + created = (known after apply) 2026-02-09 02:24:24.274789 | orchestrator | + flavor_id = (known after apply) 2026-02-09 02:24:24.274792 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-09 02:24:24.274796 | 
orchestrator | + force_delete = false 2026-02-09 02:24:24.274800 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-09 02:24:24.274804 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.274808 | orchestrator | + image_id = (known after apply) 2026-02-09 02:24:24.274811 | orchestrator | + image_name = (known after apply) 2026-02-09 02:24:24.274815 | orchestrator | + key_pair = "testbed" 2026-02-09 02:24:24.274819 | orchestrator | + name = "testbed-node-4" 2026-02-09 02:24:24.274823 | orchestrator | + power_state = "active" 2026-02-09 02:24:24.274827 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.274830 | orchestrator | + security_groups = (known after apply) 2026-02-09 02:24:24.274834 | orchestrator | + stop_before_destroy = false 2026-02-09 02:24:24.274838 | orchestrator | + updated = (known after apply) 2026-02-09 02:24:24.274842 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-09 02:24:24.274846 | orchestrator | 2026-02-09 02:24:24.274850 | orchestrator | + block_device { 2026-02-09 02:24:24.274853 | orchestrator | + boot_index = 0 2026-02-09 02:24:24.274857 | orchestrator | + delete_on_termination = false 2026-02-09 02:24:24.274861 | orchestrator | + destination_type = "volume" 2026-02-09 02:24:24.274865 | orchestrator | + multiattach = false 2026-02-09 02:24:24.274868 | orchestrator | + source_type = "volume" 2026-02-09 02:24:24.274872 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.274876 | orchestrator | } 2026-02-09 02:24:24.274880 | orchestrator | 2026-02-09 02:24:24.274884 | orchestrator | + network { 2026-02-09 02:24:24.274887 | orchestrator | + access_network = false 2026-02-09 02:24:24.274891 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-09 02:24:24.274895 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-09 02:24:24.274899 | orchestrator | + mac = (known after apply) 2026-02-09 02:24:24.274903 | orchestrator | + name = (known 
after apply) 2026-02-09 02:24:24.274907 | orchestrator | + port = (known after apply) 2026-02-09 02:24:24.274910 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.274914 | orchestrator | } 2026-02-09 02:24:24.274918 | orchestrator | } 2026-02-09 02:24:24.275044 | orchestrator | 2026-02-09 02:24:24.275050 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-09 02:24:24.275054 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-09 02:24:24.275058 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-09 02:24:24.275062 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-09 02:24:24.275065 | orchestrator | + all_metadata = (known after apply) 2026-02-09 02:24:24.275069 | orchestrator | + all_tags = (known after apply) 2026-02-09 02:24:24.275073 | orchestrator | + availability_zone = "nova" 2026-02-09 02:24:24.275077 | orchestrator | + config_drive = true 2026-02-09 02:24:24.275081 | orchestrator | + created = (known after apply) 2026-02-09 02:24:24.275084 | orchestrator | + flavor_id = (known after apply) 2026-02-09 02:24:24.275088 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-09 02:24:24.275092 | orchestrator | + force_delete = false 2026-02-09 02:24:24.275096 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-09 02:24:24.275099 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.275103 | orchestrator | + image_id = (known after apply) 2026-02-09 02:24:24.275107 | orchestrator | + image_name = (known after apply) 2026-02-09 02:24:24.275111 | orchestrator | + key_pair = "testbed" 2026-02-09 02:24:24.275114 | orchestrator | + name = "testbed-node-5" 2026-02-09 02:24:24.275118 | orchestrator | + power_state = "active" 2026-02-09 02:24:24.275122 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.275125 | orchestrator | + security_groups = (known after apply) 2026-02-09 02:24:24.275129 | orchestrator | + 
stop_before_destroy = false 2026-02-09 02:24:24.275133 | orchestrator | + updated = (known after apply) 2026-02-09 02:24:24.275137 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-09 02:24:24.275141 | orchestrator | 2026-02-09 02:24:24.275144 | orchestrator | + block_device { 2026-02-09 02:24:24.275171 | orchestrator | + boot_index = 0 2026-02-09 02:24:24.275176 | orchestrator | + delete_on_termination = false 2026-02-09 02:24:24.275180 | orchestrator | + destination_type = "volume" 2026-02-09 02:24:24.275183 | orchestrator | + multiattach = false 2026-02-09 02:24:24.275187 | orchestrator | + source_type = "volume" 2026-02-09 02:24:24.275191 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.275195 | orchestrator | } 2026-02-09 02:24:24.275198 | orchestrator | 2026-02-09 02:24:24.275202 | orchestrator | + network { 2026-02-09 02:24:24.275206 | orchestrator | + access_network = false 2026-02-09 02:24:24.275210 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-09 02:24:24.275214 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-09 02:24:24.275217 | orchestrator | + mac = (known after apply) 2026-02-09 02:24:24.275221 | orchestrator | + name = (known after apply) 2026-02-09 02:24:24.275225 | orchestrator | + port = (known after apply) 2026-02-09 02:24:24.275229 | orchestrator | + uuid = (known after apply) 2026-02-09 02:24:24.275233 | orchestrator | } 2026-02-09 02:24:24.275236 | orchestrator | } 2026-02-09 02:24:24.275242 | orchestrator | 2026-02-09 02:24:24.275246 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-09 02:24:24.275250 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-09 02:24:24.275254 | orchestrator | + fingerprint = (known after apply) 2026-02-09 02:24:24.275258 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.275262 | orchestrator | + name = "testbed" 2026-02-09 02:24:24.275265 | orchestrator | + private_key = 
(sensitive value) 2026-02-09 02:24:24.275269 | orchestrator | + public_key = (known after apply) 2026-02-09 02:24:24.275273 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.275276 | orchestrator | + user_id = (known after apply) 2026-02-09 02:24:24.275280 | orchestrator | } 2026-02-09 02:24:24.275284 | orchestrator | 2026-02-09 02:24:24.275288 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-09 02:24:24.275291 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-09 02:24:24.275301 | orchestrator | + device = (known after apply) 2026-02-09 02:24:24.275304 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.275308 | orchestrator | + instance_id = (known after apply) 2026-02-09 02:24:24.275312 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.275323 | orchestrator | + volume_id = (known after apply) 2026-02-09 02:24:24.275326 | orchestrator | } 2026-02-09 02:24:24.275330 | orchestrator | 2026-02-09 02:24:24.275334 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-09 02:24:24.275338 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-09 02:24:24.275341 | orchestrator | + device = (known after apply) 2026-02-09 02:24:24.275345 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.275349 | orchestrator | + instance_id = (known after apply) 2026-02-09 02:24:24.275353 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.275356 | orchestrator | + volume_id = (known after apply) 2026-02-09 02:24:24.275360 | orchestrator | } 2026-02-09 02:24:24.275365 | orchestrator | 2026-02-09 02:24:24.275369 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-09 02:24:24.275373 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-09 02:24:24.279003 | orchestrator | + network_id = (known after apply) 2026-02-09 02:24:24.279006 | orchestrator | + no_gateway = false 2026-02-09 02:24:24.279010 | orchestrator | + region = (known after apply) 2026-02-09 02:24:24.279014 | orchestrator | + service_types = (known after apply) 2026-02-09 02:24:24.279023 | orchestrator | + tenant_id = (known after apply) 2026-02-09 02:24:24.279029 | orchestrator | 2026-02-09 02:24:24.279036 | orchestrator | + allocation_pool { 2026-02-09 02:24:24.279042 | orchestrator | + end = "192.168.31.250" 2026-02-09 02:24:24.279047 | orchestrator | + start = "192.168.31.200" 2026-02-09 02:24:24.279053 | orchestrator | } 2026-02-09 02:24:24.279059 | orchestrator | } 2026-02-09 02:24:24.279065 | orchestrator | 2026-02-09 02:24:24.279071 | orchestrator | # terraform_data.image will be created 2026-02-09 02:24:24.279077 | orchestrator | + resource "terraform_data" "image" { 2026-02-09 02:24:24.279083 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.279089 | orchestrator | + input = "Ubuntu 24.04" 2026-02-09 02:24:24.279094 | orchestrator | + output = (known after apply) 2026-02-09 02:24:24.279100 | orchestrator | } 2026-02-09 02:24:24.279106 | orchestrator | 2026-02-09 02:24:24.279112 | orchestrator | # terraform_data.image_node will be created 2026-02-09 02:24:24.279118 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-09 02:24:24.279123 | orchestrator | + id = (known after apply) 2026-02-09 02:24:24.279128 | orchestrator | + input = "Ubuntu 24.04" 2026-02-09 02:24:24.279134 | orchestrator | + output = (known after apply) 2026-02-09 02:24:24.279139 | orchestrator | } 2026-02-09 02:24:24.279145 | orchestrator | 2026-02-09 02:24:24.279166 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
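The plan above creates two `terraform_data` resources whose only payload is the image name. A minimal sketch of that pattern (the `image_name` variable name and the `replace_triggered_by` remark are assumptions for illustration, not taken from the testbed's actual configuration):

```hcl
variable "image_name" {
  type    = string
  default = "Ubuntu 24.04"
}

# terraform_data stores an arbitrary input value in state. Other resources
# can reference it (e.g. via lifecycle.replace_triggered_by) so that
# changing the image name forces their replacement on the next apply.
resource "terraform_data" "image" {
  input = var.image_name
}
```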
2026-02-09 02:24:24.279173 | orchestrator |
2026-02-09 02:24:24.279179 | orchestrator | Changes to Outputs:
2026-02-09 02:24:24.279184 | orchestrator | + manager_address = (sensitive value)
2026-02-09 02:24:24.279190 | orchestrator | + private_key = (sensitive value)
2026-02-09 02:24:24.520244 | orchestrator | terraform_data.image: Creating...
2026-02-09 02:24:24.521814 | orchestrator | terraform_data.image: Creation complete after 0s [id=9d74cc40-db9b-96a2-1c20-f1fd26e28f42]
2026-02-09 02:24:24.521854 | orchestrator | terraform_data.image_node: Creating...
2026-02-09 02:24:24.521862 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=a8755073-6269-ead7-1769-473fe069ef49]
2026-02-09 02:24:24.545322 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-09 02:24:24.545625 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-09 02:24:24.548871 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-09 02:24:24.549721 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-09 02:24:24.550071 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-09 02:24:24.550093 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-09 02:24:24.558123 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-09 02:24:24.561416 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-09 02:24:24.564205 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-09 02:24:24.568034 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-09 02:24:25.004520 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-09 02:24:25.010417 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-09 02:24:25.016323 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-09 02:24:25.021758 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-09 02:24:25.068062 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-02-09 02:24:25.073373 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-09 02:24:25.663893 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=79e2440a-8852-44cc-8d34-dbc8671450c6]
2026-02-09 02:24:25.672081 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-09 02:24:28.176051 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=31e706da-f17a-4e24-9ea1-628640491509]
2026-02-09 02:24:28.181554 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-09 02:24:28.182952 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=aca63f30-83ce-4e61-8910-3b8ba5d1369c]
2026-02-09 02:24:28.186568 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0]
2026-02-09 02:24:28.194959 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-09 02:24:28.197059 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-09 02:24:28.206123 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=96ef4066-b91b-4665-8e67-19d3f9b9c2aa]
2026-02-09 02:24:28.206986 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=2778285ba9861a092396d19781841cb5ac86a11a]
2026-02-09 02:24:28.208513 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=e6e78f5c-a05f-4a2f-8630-adfade66484d]
2026-02-09 02:24:28.210058 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-09 02:24:28.212218 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-09 02:24:28.212753 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-09 02:24:28.226646 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=1815f4db-c191-49bf-971c-f1dbc8705b46]
2026-02-09 02:24:28.232980 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-09 02:24:28.275515 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=ad4d2000-db3f-4cfd-be49-267ba7004717]
2026-02-09 02:24:28.284469 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-09 02:24:28.288756 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=04e8f271-95dc-41c9-84a5-801ade107da4]
2026-02-09 02:24:28.293515 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=f8c39c36f67c1314cdd70252bedff1ba23d46bbc]
2026-02-09 02:24:28.293590 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=accd83ee-77ec-4f4c-88d5-19cec15f3e24]
2026-02-09 02:24:28.297366 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-09 02:24:29.004213 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=07b5cadf-5aeb-4e31-9bf7-fe940ba942fa]
2026-02-09 02:24:29.627584 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 2s [id=64c5508f-ea7b-4ebd-9bad-1cf5583b5390]
2026-02-09 02:24:29.633279 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-09 02:24:31.555768 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=669d190d-3883-4e68-b86c-8247f53b6ca7]
2026-02-09 02:24:31.591509 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=e9ffd840-8794-4a3d-8eb0-6a90290484dd]
2026-02-09 02:24:31.603457 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=f810d870-b1b5-47b5-8aca-c0a0a7072d9d]
2026-02-09 02:24:31.606655 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=e53c6ccf-ffc4-4947-a04a-5ba76f724671]
2026-02-09 02:24:31.626885 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=62fae712-754c-4f2b-a4e9-8035d76f7af8]
2026-02-09 02:24:32.207361 | orchestrator | openstack_networking_router_v2.router: Creation complete after 2s [id=8e41c5fa-e219-49a5-bb0b-cbfa77ab6ec4]
2026-02-09 02:24:32.211417 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-09 02:24:32.212143 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-09 02:24:32.214490 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-09 02:24:32.341729 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=05884397-9613-4241-8546-48042913fb5f]
2026-02-09 02:24:32.422734 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=1d59759c-c775-484e-82b9-d1d8b59ecabc]
2026-02-09 02:24:32.428961 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-09 02:24:32.429675 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-09 02:24:32.429897 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-09 02:24:32.432706 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-09 02:24:32.434217 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-09 02:24:32.435577 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-09 02:24:32.450102 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-09 02:24:32.450281 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-09 02:24:32.450297 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=697a86b2-499a-49e8-a92f-11e14b8ba339]
2026-02-09 02:24:32.455589 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-09 02:24:32.628533 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=b1f76ab2-a4d1-405b-a58d-0545dc9bf328]
2026-02-09 02:24:32.636157 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-09 02:24:32.862649 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=73554ce6-87db-4f3e-9220-96f2e4127cd9]
2026-02-09 02:24:32.867718 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-09 02:24:33.032359 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=3c5e5293-6ad9-4cf4-856b-cf840925d5c6]
2026-02-09 02:24:33.044583 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-09 02:24:33.075653 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=727bf1d1-0a76-486a-9553-a4334783c16a]
2026-02-09 02:24:33.080198 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-09 02:24:33.099710 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=9f9274af-c2fa-4fc5-860f-42b6c2963119]
2026-02-09 02:24:33.104657 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-09 02:24:33.106150 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=410bd866-1bf7-4777-91d0-5185a802e328]
2026-02-09 02:24:33.110775 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-09 02:24:33.134463 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=7adecf51-ad28-4d61-acdf-e50765e97f99]
2026-02-09 02:24:33.139120 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-09 02:24:33.148976 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=4b7adcdf-254d-4fc6-bcb9-36dd452d606e]
2026-02-09 02:24:33.258905 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=14b392e0-d81c-416d-bdc3-249a6e87450a]
2026-02-09 02:24:33.260926 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=b103606c-23ad-4651-a4d8-c4730e315a63]
2026-02-09 02:24:33.298544 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 0s [id=0f7f80b7-4474-4ee3-aa02-396ee794cc9d]
2026-02-09 02:24:33.416025 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=a546779b-1845-432a-a77f-39c50e2de73a]
2026-02-09 02:24:33.588992 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=6e45bf4d-f508-4197-9f6e-53440d1667f8]
2026-02-09 02:24:33.632134 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=9e93e59b-ba0e-4319-a5f6-702cf88df7b4]
2026-02-09 02:24:33.668713 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=b27aed70-03c2-4438-9820-b2e9af588aff]
2026-02-09 02:24:33.804259 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=3bb48cc4-72a1-4ff8-ada2-f695626dbf13]
2026-02-09 02:24:34.819159 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=34790658-4446-4aa8-b630-87ef222f7937]
2026-02-09 02:24:34.839350 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-09 02:24:34.847318 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
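The creation order above (each security group completes before its rules start) falls out of implicit dependencies: a rule references the group's `id`, so Terraform sequences them automatically. A sketch consistent with the plan output, using the attribute values shown there:

```hcl
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# Referencing the group's id creates an implicit dependency, so this rule
# is only created once the security group exists.
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```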
2026-02-09 02:24:34.853163 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-09 02:24:34.860613 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-09 02:24:34.871067 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-09 02:24:34.874416 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-09 02:24:34.886488 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-09 02:24:36.280195 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=73274e1b-5fc6-418f-9bc0-95fc2f0a355e]
2026-02-09 02:24:36.286548 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-09 02:24:36.291341 | orchestrator | local_file.inventory: Creating...
2026-02-09 02:24:36.291706 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-09 02:24:36.296500 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=805707f3e0b370ccb86f1e3367c4f0f3752a02be]
2026-02-09 02:24:36.296563 | orchestrator | local_file.inventory: Creation complete after 0s [id=c5e322f5056934beb8563a70a29ee221c658c658]
2026-02-09 02:24:36.984035 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=73274e1b-5fc6-418f-9bc0-95fc2f0a355e]
2026-02-09 02:24:44.849167 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-09 02:24:44.857481 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-09 02:24:44.863848 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-09 02:24:44.876104 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-09 02:24:44.876238 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-09 02:24:44.888434 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-09 02:24:54.850241 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-09 02:24:54.857680 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-09 02:24:54.864981 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-09 02:24:54.876466 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-09 02:24:54.876591 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-09 02:24:54.888877 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-09 02:24:55.364564 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=183305de-ed95-4dd8-ba35-93bbb2ecc57c]
2026-02-09 02:24:55.377506 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=4abe6ebe-a50c-497d-bbb1-33a31c413b33]
2026-02-09 02:24:55.387085 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=a1eaaace-e367-4d65-b946-175c8d6da32b]
2026-02-09 02:24:55.456653 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=372b24f8-597a-4dae-9bd8-8cbdf9150620]
2026-02-09 02:25:04.865672 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-09 02:25:04.890129 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-09 02:25:05.649434 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=6181e2a6-b60b-4624-a353-eee305a6d173]
2026-02-09 02:25:05.947559 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=c7b15924-4496-4ec6-8f1d-5d5c541d0ae5]
2026-02-09 02:25:05.956819 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-09 02:25:05.971039 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7382930964077813776]
2026-02-09 02:25:05.978568 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-09 02:25:05.980554 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-09 02:25:05.981822 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-09 02:25:05.987338 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-09 02:25:05.987823 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-09 02:25:05.994056 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-09 02:25:06.000593 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-09 02:25:06.006005 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-09 02:25:06.020502 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-09 02:25:06.020633 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-09 02:25:09.352080 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=4abe6ebe-a50c-497d-bbb1-33a31c413b33/96ef4066-b91b-4665-8e67-19d3f9b9c2aa]
2026-02-09 02:25:09.353611 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=183305de-ed95-4dd8-ba35-93bbb2ecc57c/1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0]
2026-02-09 02:25:09.376170 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=4abe6ebe-a50c-497d-bbb1-33a31c413b33/04e8f271-95dc-41c9-84a5-801ade107da4]
2026-02-09 02:25:09.383769 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=a1eaaace-e367-4d65-b946-175c8d6da32b/accd83ee-77ec-4f4c-88d5-19cec15f3e24]
2026-02-09 02:25:09.394780 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=183305de-ed95-4dd8-ba35-93bbb2ecc57c/1815f4db-c191-49bf-971c-f1dbc8705b46]
2026-02-09 02:25:09.412379 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=a1eaaace-e367-4d65-b946-175c8d6da32b/aca63f30-83ce-4e61-8910-3b8ba5d1369c]
2026-02-09 02:25:15.483629 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=4abe6ebe-a50c-497d-bbb1-33a31c413b33/e6e78f5c-a05f-4a2f-8630-adfade66484d]
2026-02-09 02:25:15.486390 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=183305de-ed95-4dd8-ba35-93bbb2ecc57c/ad4d2000-db3f-4cfd-be49-267ba7004717]
2026-02-09 02:25:15.511714 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=a1eaaace-e367-4d65-b946-175c8d6da32b/31e706da-f17a-4e24-9ea1-628640491509]
2026-02-09 02:25:16.022504 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-09 02:25:26.023409 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-09 02:25:26.425033 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=1c48f742-bfec-4a87-b19f-6a40ff8de7ce]
2026-02-09 02:25:26.439659 | orchestrator |
2026-02-09 02:25:26.439763 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-09 02:25:26.439783 | orchestrator |
2026-02-09 02:25:26.439797 | orchestrator | Outputs:
2026-02-09 02:25:26.439819 | orchestrator |
2026-02-09 02:25:26.439839 | orchestrator | manager_address =
2026-02-09 02:25:26.439848 | orchestrator | private_key =
2026-02-09 02:25:26.524556 | orchestrator | ok: Runtime: 0:01:08.027766
2026-02-09 02:25:26.546814 |
2026-02-09 02:25:26.546951 | TASK [Fetch manager address]
2026-02-09 02:25:27.020861 | orchestrator | ok
2026-02-09 02:25:27.031600 |
2026-02-09 02:25:27.031741 | TASK [Set manager_host address]
2026-02-09 02:25:27.110929 | orchestrator | ok
2026-02-09 02:25:27.121187 |
2026-02-09 02:25:27.121333 | LOOP [Update ansible collections]
2026-02-09 02:25:28.379570 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-09 02:25:28.379921 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-09 02:25:28.379978 | orchestrator | Starting galaxy collection install process
2026-02-09 02:25:28.381001 | orchestrator | Process install dependency map
2026-02-09 02:25:28.381069 | orchestrator | Starting collection install process
2026-02-09 02:25:28.381103 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-09 02:25:28.381139 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-09 02:25:28.381174 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-09 02:25:28.381250 | orchestrator | ok: Item: commons Runtime: 0:00:00.879328
2026-02-09 02:25:29.667523 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-09 02:25:29.667684 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-09 02:25:29.667735 | orchestrator | Starting galaxy collection install process
2026-02-09 02:25:29.667776 | orchestrator | Process install dependency map
2026-02-09 02:25:29.667813 | orchestrator | Starting collection install process
2026-02-09 02:25:29.667847 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-09 02:25:29.667880 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-09 02:25:29.667912 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-09 02:25:29.667958 | orchestrator | ok: Item: services Runtime: 0:00:00.987545
2026-02-09 02:25:29.691217 |
2026-02-09 02:25:29.691440 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-09 02:25:40.301595 | orchestrator | ok
2026-02-09 02:25:40.311976 |
2026-02-09 02:25:40.312089 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-09 02:26:40.357277 | orchestrator | ok
2026-02-09 02:26:40.368029 |
2026-02-09 02:26:40.368164 | TASK [Fetch manager ssh hostkey]
2026-02-09 02:26:41.945484 | orchestrator | Output suppressed because no_log was given
2026-02-09 02:26:41.959814 |
2026-02-09 02:26:41.959979 | TASK [Get ssh keypair from terraform environment]
2026-02-09 02:26:42.497431 | orchestrator | ok: Runtime: 0:00:00.007771
2026-02-09 02:26:42.513795 |
2026-02-09 02:26:42.513954 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-09 02:26:42.548714 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-09 02:26:42.557309 |
2026-02-09 02:26:42.557442 | TASK [Run manager part 0]
2026-02-09 02:26:43.620806 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-09 02:26:43.699175 | orchestrator |
2026-02-09 02:26:43.699219 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-09 02:26:43.699226 | orchestrator |
2026-02-09 02:26:43.699242 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-09 02:26:45.693456 | orchestrator | ok: [testbed-manager]
2026-02-09 02:26:45.693496 | orchestrator |
2026-02-09 02:26:45.693516 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-09 02:26:45.693525 | orchestrator |
2026-02-09 02:26:45.693534 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-09 02:26:47.790549 | orchestrator | ok: [testbed-manager]
2026-02-09 02:26:47.790606 | orchestrator |
2026-02-09 02:26:47.790618 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-09 02:26:48.449724 | orchestrator | ok: [testbed-manager]
2026-02-09 02:26:48.449798 | orchestrator |
2026-02-09 02:26:48.449811 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-09 02:26:48.492262 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:26:48.492313 | orchestrator |
2026-02-09 02:26:48.492326 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-09 02:26:48.519940 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:26:48.519983 | orchestrator |
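`manager_address` and `private_key` were shown as `(sensitive value)` in the plan and printed without values after the apply because the corresponding outputs are marked sensitive. A sketch of such an output (the referenced floating-IP attribute is an assumption for illustration, not confirmed by the log):

```hcl
# sensitive = true redacts the value in plan/apply console output;
# it can still be read explicitly, e.g. `terraform output -raw manager_address`.
output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}
```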
2026-02-09 02:26:48.519991 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-09 02:26:48.552751 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:26:48.552796 | orchestrator | 2026-02-09 02:26:48.552803 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-09 02:26:48.585859 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:26:48.585914 | orchestrator | 2026-02-09 02:26:48.585925 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-09 02:26:48.616171 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:26:48.616217 | orchestrator | 2026-02-09 02:26:48.616230 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-09 02:26:48.646384 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:26:48.646436 | orchestrator | 2026-02-09 02:26:48.646448 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-09 02:26:48.680159 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:26:48.680219 | orchestrator | 2026-02-09 02:26:48.680234 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-09 02:26:49.411191 | orchestrator | changed: [testbed-manager] 2026-02-09 02:26:49.411236 | orchestrator | 2026-02-09 02:26:49.411245 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-09 02:29:40.687320 | orchestrator | changed: [testbed-manager] 2026-02-09 02:29:40.687378 | orchestrator | 2026-02-09 02:29:40.687390 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-09 02:31:06.437270 | orchestrator | changed: [testbed-manager] 2026-02-09 02:31:06.437312 | orchestrator | 2026-02-09 02:31:06.437320 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-02-09 02:31:29.729083 | orchestrator | changed: [testbed-manager] 2026-02-09 02:31:29.729164 | orchestrator | 2026-02-09 02:31:29.729177 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-09 02:31:39.100117 | orchestrator | changed: [testbed-manager] 2026-02-09 02:31:39.100168 | orchestrator | 2026-02-09 02:31:39.100176 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-09 02:31:39.147205 | orchestrator | ok: [testbed-manager] 2026-02-09 02:31:39.147274 | orchestrator | 2026-02-09 02:31:39.147286 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-09 02:31:39.993130 | orchestrator | ok: [testbed-manager] 2026-02-09 02:31:39.993227 | orchestrator | 2026-02-09 02:31:39.993246 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-09 02:31:40.715230 | orchestrator | changed: [testbed-manager] 2026-02-09 02:31:40.715315 | orchestrator | 2026-02-09 02:31:40.715332 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-09 02:31:47.283108 | orchestrator | changed: [testbed-manager] 2026-02-09 02:31:47.283192 | orchestrator | 2026-02-09 02:31:47.283230 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-09 02:31:53.531144 | orchestrator | changed: [testbed-manager] 2026-02-09 02:31:53.531238 | orchestrator | 2026-02-09 02:31:53.531257 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-09 02:31:56.321240 | orchestrator | changed: [testbed-manager] 2026-02-09 02:31:56.321299 | orchestrator | 2026-02-09 02:31:56.321308 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-09 02:31:58.058215 | 
orchestrator | changed: [testbed-manager] 2026-02-09 02:31:58.058314 | orchestrator | 2026-02-09 02:31:58.058328 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-09 02:31:59.230640 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-09 02:31:59.230727 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-09 02:31:59.230738 | orchestrator | 2026-02-09 02:31:59.230746 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-09 02:31:59.320275 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-09 02:31:59.320391 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-09 02:31:59.320410 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-09 02:31:59.320423 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-09 02:32:03.780007 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-09 02:32:03.780051 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-09 02:32:03.780059 | orchestrator | 2026-02-09 02:32:03.780067 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-09 02:32:04.353581 | orchestrator | changed: [testbed-manager] 2026-02-09 02:32:04.353623 | orchestrator | 2026-02-09 02:32:04.353630 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-09 02:34:23.982362 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-09 02:34:23.982465 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-09 02:34:23.982481 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-09 02:34:23.982491 | orchestrator | 2026-02-09 02:34:23.982502 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-09 02:34:26.377998 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-09 02:34:26.378188 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-09 02:34:26.378216 | orchestrator | 2026-02-09 02:34:26.378238 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-09 02:34:26.378259 | orchestrator | 2026-02-09 02:34:26.378276 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-09 02:34:27.803904 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:27.804003 | orchestrator | 2026-02-09 02:34:27.804070 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-09 02:34:27.847094 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:27.847198 | 
orchestrator | 2026-02-09 02:34:27.847213 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-09 02:34:27.945913 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:27.946005 | orchestrator | 2026-02-09 02:34:27.946072 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-09 02:34:28.767395 | orchestrator | changed: [testbed-manager] 2026-02-09 02:34:28.767492 | orchestrator | 2026-02-09 02:34:28.767516 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-09 02:34:29.536003 | orchestrator | changed: [testbed-manager] 2026-02-09 02:34:29.536186 | orchestrator | 2026-02-09 02:34:29.536205 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-09 02:34:30.909625 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-09 02:34:30.909710 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-09 02:34:30.909720 | orchestrator | 2026-02-09 02:34:30.909748 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-09 02:34:32.356896 | orchestrator | changed: [testbed-manager] 2026-02-09 02:34:32.356992 | orchestrator | 2026-02-09 02:34:32.357005 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-09 02:34:34.128383 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-09 02:34:34.128440 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-09 02:34:34.128453 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-09 02:34:34.128464 | orchestrator | 2026-02-09 02:34:34.128477 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-09 02:34:34.186813 | orchestrator | skipping: 
[testbed-manager] 2026-02-09 02:34:34.186859 | orchestrator | 2026-02-09 02:34:34.186866 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-09 02:34:34.253777 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:34:34.253821 | orchestrator | 2026-02-09 02:34:34.253831 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-09 02:34:34.789007 | orchestrator | changed: [testbed-manager] 2026-02-09 02:34:34.789084 | orchestrator | 2026-02-09 02:34:34.789092 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-09 02:34:34.852484 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:34:34.852530 | orchestrator | 2026-02-09 02:34:34.852538 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-09 02:34:35.651389 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-09 02:34:35.651464 | orchestrator | changed: [testbed-manager] 2026-02-09 02:34:35.651474 | orchestrator | 2026-02-09 02:34:35.651482 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-09 02:34:35.679866 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:34:35.679955 | orchestrator | 2026-02-09 02:34:35.679967 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-09 02:34:35.706273 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:34:35.706347 | orchestrator | 2026-02-09 02:34:35.706357 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-09 02:34:35.733004 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:34:35.733119 | orchestrator | 2026-02-09 02:34:35.733136 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-09 02:34:35.800664 | 
orchestrator | skipping: [testbed-manager] 2026-02-09 02:34:35.800759 | orchestrator | 2026-02-09 02:34:35.800771 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-09 02:34:36.505851 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:36.505940 | orchestrator | 2026-02-09 02:34:36.505955 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-09 02:34:36.505967 | orchestrator | 2026-02-09 02:34:36.505978 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-09 02:34:37.914279 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:37.914397 | orchestrator | 2026-02-09 02:34:37.914420 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-09 02:34:38.883309 | orchestrator | changed: [testbed-manager] 2026-02-09 02:34:38.883391 | orchestrator | 2026-02-09 02:34:38.883404 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 02:34:38.883414 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-09 02:34:38.883423 | orchestrator | 2026-02-09 02:34:39.376605 | orchestrator | ok: Runtime: 0:07:56.088567 2026-02-09 02:34:39.394498 | 2026-02-09 02:34:39.394638 | TASK [Point out that logging in to the manager is now possible] 2026-02-09 02:34:39.431832 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-09 02:34:39.441051 | 2026-02-09 02:34:39.441166 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-09 02:34:39.475223 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-02-09 02:34:39.484253 | 2026-02-09 02:34:39.484376 | TASK [Run manager part 1 + 2] 2026-02-09 02:34:40.442736 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-09 02:34:40.514997 | orchestrator | 2026-02-09 02:34:40.515043 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-09 02:34:40.515079 | orchestrator | 2026-02-09 02:34:40.515094 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-09 02:34:43.494802 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:43.494850 | orchestrator | 2026-02-09 02:34:43.494876 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-09 02:34:43.534968 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:34:43.535007 | orchestrator | 2026-02-09 02:34:43.535016 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-09 02:34:43.582837 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:43.582871 | orchestrator | 2026-02-09 02:34:43.582878 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-09 02:34:43.625248 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:43.625286 | orchestrator | 2026-02-09 02:34:43.625293 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-09 02:34:43.699174 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:43.699220 | orchestrator | 2026-02-09 02:34:43.699227 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-09 02:34:43.769930 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:43.769969 | orchestrator | 2026-02-09 02:34:43.769976 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-09 02:34:43.819769 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-09 02:34:43.819810 | orchestrator | 2026-02-09 02:34:43.819816 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-09 02:34:44.596932 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:44.596981 | orchestrator | 2026-02-09 02:34:44.596989 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-09 02:34:44.638880 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:34:44.638921 | orchestrator | 2026-02-09 02:34:44.638926 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-09 02:34:46.115759 | orchestrator | changed: [testbed-manager] 2026-02-09 02:34:46.115823 | orchestrator | 2026-02-09 02:34:46.115834 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-09 02:34:46.656990 | orchestrator | ok: [testbed-manager] 2026-02-09 02:34:46.657036 | orchestrator | 2026-02-09 02:34:46.657041 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-09 02:34:47.779249 | orchestrator | changed: [testbed-manager] 2026-02-09 02:34:47.779293 | orchestrator | 2026-02-09 02:34:47.779301 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-09 02:35:04.053388 | orchestrator | changed: [testbed-manager] 2026-02-09 02:35:04.053481 | orchestrator | 2026-02-09 02:35:04.053497 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-09 02:35:04.777463 | orchestrator | ok: [testbed-manager] 2026-02-09 02:35:04.777786 | orchestrator | 2026-02-09 02:35:04.777821 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-09 02:35:04.826064 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:35:04.826138 | orchestrator | 2026-02-09 02:35:04.826145 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-09 02:35:05.765553 | orchestrator | changed: [testbed-manager] 2026-02-09 02:35:05.765684 | orchestrator | 2026-02-09 02:35:05.765713 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-09 02:35:06.745626 | orchestrator | changed: [testbed-manager] 2026-02-09 02:35:06.745707 | orchestrator | 2026-02-09 02:35:06.745720 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-09 02:35:07.346662 | orchestrator | changed: [testbed-manager] 2026-02-09 02:35:07.346729 | orchestrator | 2026-02-09 02:35:07.346742 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-09 02:35:07.388113 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-09 02:35:07.388229 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-09 02:35:07.388253 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-09 02:35:07.388272 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-09 02:35:09.481541 | orchestrator | changed: [testbed-manager] 2026-02-09 02:35:09.481626 | orchestrator | 2026-02-09 02:35:09.481639 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-09 02:35:18.675216 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-09 02:35:18.675333 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-09 02:35:18.675356 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-09 02:35:18.675380 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-09 02:35:18.675405 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-09 02:35:18.675418 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-09 02:35:18.675430 | orchestrator | 2026-02-09 02:35:18.675445 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-09 02:35:19.739830 | orchestrator | changed: [testbed-manager] 2026-02-09 02:35:19.739872 | orchestrator | 2026-02-09 02:35:19.739879 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-09 02:35:19.783234 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:35:19.783275 | orchestrator | 2026-02-09 02:35:19.783282 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-09 02:35:22.907627 | orchestrator | changed: [testbed-manager] 2026-02-09 02:35:22.907700 | orchestrator | 2026-02-09 02:35:22.907710 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-09 02:35:22.945698 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:35:22.945793 | orchestrator | 2026-02-09 02:35:22.945808 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-09 02:37:04.464632 | orchestrator | changed: [testbed-manager] 2026-02-09 
02:37:04.464675 | orchestrator | 2026-02-09 02:37:04.464684 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-09 02:37:05.696076 | orchestrator | ok: [testbed-manager] 2026-02-09 02:37:05.696174 | orchestrator | 2026-02-09 02:37:05.696196 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 02:37:05.696210 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-09 02:37:05.696222 | orchestrator | 2026-02-09 02:37:06.108941 | orchestrator | ok: Runtime: 0:02:26.012154 2026-02-09 02:37:06.126445 | 2026-02-09 02:37:06.126618 | TASK [Reboot manager] 2026-02-09 02:37:07.662146 | orchestrator | ok: Runtime: 0:00:01.033326 2026-02-09 02:37:07.679137 | 2026-02-09 02:37:07.679281 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-09 02:37:22.596434 | orchestrator | ok 2026-02-09 02:37:22.607172 | 2026-02-09 02:37:22.607296 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-09 02:38:22.649904 | orchestrator | ok 2026-02-09 02:38:22.661133 | 2026-02-09 02:38:22.661273 | TASK [Deploy manager + bootstrap nodes] 2026-02-09 02:38:25.434222 | orchestrator | 2026-02-09 02:38:25.434550 | orchestrator | # DEPLOY MANAGER 2026-02-09 02:38:25.435221 | orchestrator | 2026-02-09 02:38:25.435256 | orchestrator | + set -e 2026-02-09 02:38:25.435279 | orchestrator | + echo 2026-02-09 02:38:25.435302 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-09 02:38:25.435330 | orchestrator | + echo 2026-02-09 02:38:25.435440 | orchestrator | + cat /opt/manager-vars.sh 2026-02-09 02:38:25.437778 | orchestrator | export NUMBER_OF_NODES=6 2026-02-09 02:38:25.437827 | orchestrator | 2026-02-09 02:38:25.437834 | orchestrator | export CEPH_VERSION=reef 2026-02-09 02:38:25.437843 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-09 02:38:25.437850 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-09 02:38:25.437866 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-09 02:38:25.437872 | orchestrator | 2026-02-09 02:38:25.437882 | orchestrator | export ARA=false 2026-02-09 02:38:25.437889 | orchestrator | export DEPLOY_MODE=manager 2026-02-09 02:38:25.437899 | orchestrator | export TEMPEST=false 2026-02-09 02:38:25.437906 | orchestrator | export IS_ZUUL=true 2026-02-09 02:38:25.437913 | orchestrator | 2026-02-09 02:38:25.437924 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 02:38:25.437930 | orchestrator | export EXTERNAL_API=false 2026-02-09 02:38:25.437934 | orchestrator | 2026-02-09 02:38:25.437938 | orchestrator | export IMAGE_USER=ubuntu 2026-02-09 02:38:25.437945 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-09 02:38:25.437949 | orchestrator | 2026-02-09 02:38:25.437953 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-09 02:38:25.438130 | orchestrator | 2026-02-09 02:38:25.438139 | orchestrator | + echo 2026-02-09 02:38:25.438147 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-09 02:38:25.439183 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 02:38:25.439232 | orchestrator | ++ INTERACTIVE=false 2026-02-09 02:38:25.439241 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 02:38:25.439250 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-09 02:38:25.439256 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 02:38:25.439262 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 02:38:25.439268 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 02:38:25.439273 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-09 02:38:25.439279 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 02:38:25.439285 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 02:38:25.439291 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 02:38:25.439297 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 02:38:25.439303 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 02:38:25.439308 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 02:38:25.439324 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 02:38:25.439330 | orchestrator | ++ export ARA=false 2026-02-09 02:38:25.439336 | orchestrator | ++ ARA=false 2026-02-09 02:38:25.439341 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 02:38:25.439347 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 02:38:25.439352 | orchestrator | ++ export TEMPEST=false 2026-02-09 02:38:25.439358 | orchestrator | ++ TEMPEST=false 2026-02-09 02:38:25.439364 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 02:38:25.439395 | orchestrator | ++ IS_ZUUL=true 2026-02-09 02:38:25.439401 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 02:38:25.439414 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 02:38:25.439419 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 02:38:25.439425 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 02:38:25.439430 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 02:38:25.439436 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 02:38:25.439441 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 02:38:25.439447 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 02:38:25.439453 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 02:38:25.439458 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 02:38:25.439464 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-09 02:38:25.495743 | orchestrator | + docker version 2026-02-09 02:38:25.587958 | orchestrator | Client: Docker Engine - Community 2026-02-09 02:38:25.588046 | orchestrator | Version: 27.5.1 2026-02-09 02:38:25.588057 | orchestrator | API version: 1.47 2026-02-09 02:38:25.588064 | orchestrator | Go version: go1.22.11 2026-02-09 02:38:25.588069 | orchestrator | Git commit: 9f9e405 2026-02-09 02:38:25.588075 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-09 02:38:25.588082 | orchestrator | OS/Arch: linux/amd64 2026-02-09 02:38:25.588088 | orchestrator | Context: default 2026-02-09 02:38:25.588094 | orchestrator | 2026-02-09 02:38:25.588100 | orchestrator | Server: Docker Engine - Community 2026-02-09 02:38:25.588106 | orchestrator | Engine: 2026-02-09 02:38:25.588112 | orchestrator | Version: 27.5.1 2026-02-09 02:38:25.588118 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-09 02:38:25.588146 | orchestrator | Go version: go1.22.11 2026-02-09 02:38:25.588152 | orchestrator | Git commit: 4c9b3b0 2026-02-09 02:38:25.588157 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-09 02:38:25.588163 | orchestrator | OS/Arch: linux/amd64 2026-02-09 02:38:25.588169 | orchestrator | Experimental: false 2026-02-09 02:38:25.588174 | orchestrator | containerd: 2026-02-09 02:38:25.588180 | orchestrator | Version: v2.2.1 2026-02-09 02:38:25.588186 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-09 02:38:25.588192 | orchestrator | runc: 2026-02-09 02:38:25.588197 | orchestrator | Version: 1.3.4 2026-02-09 02:38:25.588203 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-09 02:38:25.588209 | orchestrator | docker-init: 2026-02-09 02:38:25.588223 | orchestrator | Version: 0.19.0 2026-02-09 02:38:25.588230 | orchestrator | GitCommit: de40ad0 2026-02-09 02:38:25.591640 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-09 02:38:25.601108 | orchestrator | + set -e 2026-02-09 02:38:25.601189 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 02:38:25.601198 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 02:38:25.601206 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 02:38:25.601214 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-09 02:38:25.601221 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 02:38:25.601229 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 
02:38:25.601238 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 02:38:25.601245 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 02:38:25.601253 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 02:38:25.601261 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 02:38:25.601269 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 02:38:25.601276 | orchestrator | ++ export ARA=false 2026-02-09 02:38:25.601285 | orchestrator | ++ ARA=false 2026-02-09 02:38:25.601292 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 02:38:25.601300 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 02:38:25.601307 | orchestrator | ++ export TEMPEST=false 2026-02-09 02:38:25.601314 | orchestrator | ++ TEMPEST=false 2026-02-09 02:38:25.601322 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 02:38:25.601330 | orchestrator | ++ IS_ZUUL=true 2026-02-09 02:38:25.601337 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 02:38:25.601345 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 02:38:25.601353 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 02:38:25.601360 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 02:38:25.601393 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 02:38:25.601401 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 02:38:25.601408 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 02:38:25.601416 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 02:38:25.601423 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 02:38:25.601430 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 02:38:25.601437 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-09 02:38:25.601444 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 02:38:25.601451 | orchestrator | ++ INTERACTIVE=false 2026-02-09 02:38:25.601458 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 02:38:25.601470 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-02-09 02:38:25.601485 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-09 02:38:25.601493 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-02-09 02:38:25.609128 | orchestrator | + set -e
2026-02-09 02:38:25.609209 | orchestrator | + VERSION=9.5.0
2026-02-09 02:38:25.609227 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-02-09 02:38:25.616790 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-09 02:38:25.616856 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-09 02:38:25.622000 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-09 02:38:25.626788 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-09 02:38:25.635913 | orchestrator | /opt/configuration ~
2026-02-09 02:38:25.636001 | orchestrator | + set -e
2026-02-09 02:38:25.636017 | orchestrator | + pushd /opt/configuration
2026-02-09 02:38:25.636030 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-09 02:38:25.637825 | orchestrator | + source /opt/venv/bin/activate
2026-02-09 02:38:25.639180 | orchestrator | ++ deactivate nondestructive
2026-02-09 02:38:25.639253 | orchestrator | ++ '[' -n '' ']'
2026-02-09 02:38:25.639272 | orchestrator | ++ '[' -n '' ']'
2026-02-09 02:38:25.639331 | orchestrator | ++ hash -r
2026-02-09 02:38:25.639346 | orchestrator | ++ '[' -n '' ']'
2026-02-09 02:38:25.639358 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-09 02:38:25.639417 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-09 02:38:25.639442 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-09 02:38:25.639455 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-09 02:38:25.639467 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-09 02:38:25.639479 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-09 02:38:25.639491 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-09 02:38:25.639501 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-09 02:38:25.639508 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-09 02:38:25.639515 | orchestrator | ++ export PATH
2026-02-09 02:38:25.639523 | orchestrator | ++ '[' -n '' ']'
2026-02-09 02:38:25.639533 | orchestrator | ++ '[' -z '' ']'
2026-02-09 02:38:25.639540 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-09 02:38:25.639596 | orchestrator | ++ PS1='(venv) '
2026-02-09 02:38:25.639605 | orchestrator | ++ export PS1
2026-02-09 02:38:25.639612 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-09 02:38:25.639619 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-09 02:38:25.639626 | orchestrator | ++ hash -r
2026-02-09 02:38:25.639636 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-09 02:38:26.915246 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-09 02:38:26.916817 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-09 02:38:26.918367 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-09 02:38:26.919865 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-09 02:38:26.921340 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-09 02:38:26.931969 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-09 02:38:26.933596 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-09 02:38:26.934643 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-09 02:38:26.936215 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-09 02:38:26.969803 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-09 02:38:26.971413 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-09 02:38:26.973203 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-09 02:38:26.974714 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-09 02:38:26.979334 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-09 02:38:27.187777 | orchestrator | ++ which gilt
2026-02-09 02:38:27.189988 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-09 02:38:27.190078 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-09 02:38:27.480550 | orchestrator | osism.cfg-generics:
2026-02-09 02:38:27.615184 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-09 02:38:27.615332 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-09 02:38:27.615367 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-09 02:38:27.615426 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-09 02:38:28.343690 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-09 02:38:28.350517 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-09 02:38:28.781758 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-09 02:38:28.828142 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-09 02:38:28.828249 | orchestrator | + deactivate
2026-02-09 02:38:28.828267 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-09 02:38:28.828294 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-09 02:38:28.828306 | orchestrator | + export PATH
2026-02-09 02:38:28.828320 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-09 02:38:28.828333 | orchestrator | ~
2026-02-09 02:38:28.828344 | orchestrator | + '[' -n '' ']'
2026-02-09 02:38:28.828359 | orchestrator | + hash -r
2026-02-09 02:38:28.828366 | orchestrator | + '[' -n '' ']'
2026-02-09 02:38:28.828397 | orchestrator | + unset VIRTUAL_ENV
2026-02-09 02:38:28.828405 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-09 02:38:28.828412 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-09 02:38:28.828419 | orchestrator | + unset -f deactivate
2026-02-09 02:38:28.828426 | orchestrator | + popd
2026-02-09 02:38:28.829498 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-09 02:38:28.829515 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-02-09 02:38:28.830202 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-09 02:38:28.884902 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-09 02:38:28.885022 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-02-09 02:38:28.885595 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-09 02:38:28.940425 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-09 02:38:28.941096 | orchestrator | ++ semver 2024.2 2025.1
2026-02-09 02:38:29.000695 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-09 02:38:29.000835 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-02-09 02:38:29.101032 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-09 02:38:29.101191 | orchestrator | + source /opt/venv/bin/activate
2026-02-09 02:38:29.101249 | orchestrator | ++ deactivate nondestructive
2026-02-09 02:38:29.101268 | orchestrator | ++ '[' -n '' ']'
2026-02-09 02:38:29.101287 | orchestrator | ++ '[' -n '' ']'
2026-02-09 02:38:29.101304 | orchestrator | ++ hash -r
2026-02-09 02:38:29.101320 | orchestrator | ++ '[' -n '' ']'
2026-02-09 02:38:29.101354 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-09 02:38:29.101470 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-09 02:38:29.101507 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-09 02:38:29.101531 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-09 02:38:29.101557 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-09 02:38:29.101572 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-09 02:38:29.101591 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-09 02:38:29.101649 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-09 02:38:29.101696 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-09 02:38:29.101714 | orchestrator | ++ export PATH
2026-02-09 02:38:29.101731 | orchestrator | ++ '[' -n '' ']'
2026-02-09 02:38:29.101748 | orchestrator | ++ '[' -z '' ']'
2026-02-09 02:38:29.101766 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-09 02:38:29.101785 | orchestrator | ++ PS1='(venv) '
2026-02-09 02:38:29.101803 | orchestrator | ++ export PS1
2026-02-09 02:38:29.101821 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-09 02:38:29.101837 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-09 02:38:29.101855 | orchestrator | ++ hash -r
2026-02-09 02:38:29.101879 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-02-09 02:38:30.291897 | orchestrator |
2026-02-09 02:38:30.292001 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-02-09 02:38:30.292011 | orchestrator |
2026-02-09 02:38:30.292017 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-09 02:38:30.888195 | orchestrator | ok: [testbed-manager]
2026-02-09 02:38:30.888300 | orchestrator |
2026-02-09 02:38:30.888315 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-09 02:38:31.897868 | orchestrator | changed: [testbed-manager]
2026-02-09 02:38:31.897974 | orchestrator |
2026-02-09 02:38:31.897988 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-02-09 02:38:31.898075 | orchestrator |
2026-02-09 02:38:31.898087 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-09 02:38:35.233442 | orchestrator | ok: [testbed-manager]
2026-02-09 02:38:35.233561 | orchestrator |
2026-02-09 02:38:35.233582 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-02-09 02:38:35.288158 | orchestrator | ok: [testbed-manager]
2026-02-09 02:38:35.288232 | orchestrator |
2026-02-09 02:38:35.288240 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-02-09 02:38:35.761091 | orchestrator | changed: [testbed-manager]
2026-02-09 02:38:35.761198 | orchestrator |
2026-02-09 02:38:35.761219 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-02-09 02:38:35.807506 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:38:35.807663 | orchestrator |
2026-02-09 02:38:35.807680 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-09 02:38:36.188766 | orchestrator | changed: [testbed-manager]
2026-02-09 02:38:36.188851 | orchestrator |
2026-02-09 02:38:36.188861 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-02-09 02:38:36.541740 | orchestrator | ok: [testbed-manager]
2026-02-09 02:38:36.541864 | orchestrator |
2026-02-09 02:38:36.541894 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-02-09 02:38:36.664345 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:38:36.664514 | orchestrator |
2026-02-09 02:38:36.664532 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-02-09 02:38:36.664545 | orchestrator |
2026-02-09 02:38:36.664557 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-09 02:38:38.460681 | orchestrator | ok: [testbed-manager]
2026-02-09 02:38:38.460769 | orchestrator |
2026-02-09 02:38:38.460780 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-02-09 02:38:38.568627 | orchestrator | included: osism.services.traefik for testbed-manager
2026-02-09 02:38:38.568725 | orchestrator |
2026-02-09 02:38:38.568741 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-02-09 02:38:38.626249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-02-09 02:38:38.626332 | orchestrator |
2026-02-09 02:38:38.626343 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-02-09 02:38:39.764110 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-02-09 02:38:39.764215 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-02-09 02:38:39.764231 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-02-09 02:38:39.764244 | orchestrator |
2026-02-09 02:38:39.764259 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-02-09 02:38:41.606856 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-02-09 02:38:41.606980 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-02-09 02:38:41.606995 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-02-09 02:38:41.607006 | orchestrator |
2026-02-09 02:38:41.607061 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-02-09 02:38:42.346491 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-09 02:38:42.346619 | orchestrator | changed: [testbed-manager]
2026-02-09 02:38:42.346648 | orchestrator |
2026-02-09 02:38:42.346668 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-02-09 02:38:43.014386 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-09 02:38:43.014581 | orchestrator | changed: [testbed-manager]
2026-02-09 02:38:43.014602 | orchestrator |
2026-02-09 02:38:43.014619 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-02-09 02:38:43.072739 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:38:43.072837 | orchestrator |
2026-02-09 02:38:43.072853 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-02-09 02:38:43.435750 | orchestrator | ok: [testbed-manager]
2026-02-09 02:38:43.435846 | orchestrator |
2026-02-09 02:38:43.435862 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-02-09 02:38:43.519264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-02-09 02:38:43.519355 | orchestrator |
2026-02-09 02:38:43.519376 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-02-09 02:38:44.635259 | orchestrator | changed: [testbed-manager]
2026-02-09 02:38:44.635366 | orchestrator |
2026-02-09 02:38:44.635383 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-02-09 02:38:45.481093 | orchestrator | changed: [testbed-manager]
2026-02-09 02:38:45.481185 | orchestrator |
2026-02-09 02:38:45.481197 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-02-09 02:39:06.598675 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:06.598806 | orchestrator |
2026-02-09 02:39:06.598860 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-02-09 02:39:06.645709 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:39:06.645801 | orchestrator |
2026-02-09 02:39:06.645827 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-02-09 02:39:06.645833 | orchestrator |
2026-02-09 02:39:06.645837 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-09 02:39:08.473768 | orchestrator | ok: [testbed-manager]
2026-02-09 02:39:08.473878 | orchestrator |
2026-02-09 02:39:08.473897 | orchestrator | TASK [Apply manager role] ******************************************************
2026-02-09 02:39:08.612731 | orchestrator | included: osism.services.manager for testbed-manager
2026-02-09 02:39:08.612821 | orchestrator |
2026-02-09 02:39:08.612830 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-09 02:39:08.690303 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-09 02:39:08.690373 | orchestrator |
2026-02-09 02:39:08.690380 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-09 02:39:11.215855 | orchestrator | ok: [testbed-manager]
2026-02-09 02:39:11.215961 | orchestrator |
2026-02-09 02:39:11.215979 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-09 02:39:11.265812 | orchestrator | ok: [testbed-manager]
2026-02-09 02:39:11.265910 | orchestrator |
2026-02-09 02:39:11.265925 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-09 02:39:11.418072 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-09 02:39:11.418165 | orchestrator |
2026-02-09 02:39:11.418179 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-09 02:39:14.400730 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-02-09 02:39:14.400811 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-02-09 02:39:14.400820 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-09 02:39:14.400827 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-02-09 02:39:14.400833 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-09 02:39:14.400840 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-09 02:39:14.400846 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-09 02:39:14.400853 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-02-09 02:39:14.400859 | orchestrator |
2026-02-09 02:39:14.400866 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-09 02:39:15.034679 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:15.034777 | orchestrator |
2026-02-09 02:39:15.034793 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-09 02:39:15.706903 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:15.707018 | orchestrator |
2026-02-09 02:39:15.707048 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-09 02:39:15.790309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-09 02:39:15.790410 | orchestrator |
2026-02-09 02:39:15.790429 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-09 02:39:17.066976 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-02-09 02:39:17.067075 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-02-09 02:39:17.067085 | orchestrator |
2026-02-09 02:39:17.067092 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-09 02:39:17.730169 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:17.730274 | orchestrator |
2026-02-09 02:39:17.730289 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-09 02:39:17.787586 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:39:17.787692 | orchestrator |
2026-02-09 02:39:17.787708 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-09 02:39:17.871786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-09 02:39:17.871884 | orchestrator |
2026-02-09 02:39:17.871899 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-09 02:39:18.508970 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:18.509097 | orchestrator |
2026-02-09 02:39:18.509126 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-09 02:39:18.574208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-09 02:39:18.574310 | orchestrator |
2026-02-09 02:39:18.574328 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-09 02:39:19.964189 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-09 02:39:19.964278 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-09 02:39:19.964288 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:19.964297 | orchestrator |
2026-02-09 02:39:19.964305 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-09 02:39:20.630645 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:20.630772 | orchestrator |
2026-02-09 02:39:20.630798 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-09 02:39:20.696244 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:39:20.696335 | orchestrator |
2026-02-09 02:39:20.696350 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-09 02:39:20.800110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-09 02:39:20.800212 | orchestrator |
2026-02-09 02:39:20.800230 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-09 02:39:21.362124 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:21.362248 | orchestrator |
2026-02-09 02:39:21.362279 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-09 02:39:21.786788 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:21.786889 | orchestrator |
2026-02-09 02:39:21.786906 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-09 02:39:23.054921 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-02-09 02:39:23.055033 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-02-09 02:39:23.055051 | orchestrator |
2026-02-09 02:39:23.055066 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-09 02:39:23.735891 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:23.736023 | orchestrator |
2026-02-09 02:39:23.736051 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-09 02:39:24.100913 | orchestrator | ok: [testbed-manager]
2026-02-09 02:39:24.101034 | orchestrator |
2026-02-09 02:39:24.101060 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-09 02:39:24.473787 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:24.473923 | orchestrator |
2026-02-09 02:39:24.473953 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-09 02:39:24.508252 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:39:24.508355 | orchestrator |
2026-02-09 02:39:24.508379 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-09 02:39:24.578383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-09 02:39:24.578540 | orchestrator |
2026-02-09 02:39:24.578553 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-09 02:39:24.629878 | orchestrator | ok: [testbed-manager]
2026-02-09 02:39:24.629974 | orchestrator |
2026-02-09 02:39:24.629989 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-09 02:39:26.685166 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-02-09 02:39:26.685278 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-02-09 02:39:26.685296 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-02-09 02:39:26.685308 | orchestrator |
2026-02-09 02:39:26.685320 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-09 02:39:27.443915 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:27.444005 | orchestrator |
2026-02-09 02:39:27.444017 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-09 02:39:28.199148 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:28.199238 | orchestrator |
2026-02-09 02:39:28.199245 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-09 02:39:28.962191 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:28.962313 | orchestrator |
2026-02-09 02:39:28.962332 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-09 02:39:29.046432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-09 02:39:29.046551 | orchestrator |
2026-02-09 02:39:29.046564 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-09 02:39:29.100201 | orchestrator | ok: [testbed-manager]
2026-02-09 02:39:29.100304 | orchestrator |
2026-02-09 02:39:29.100320 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-09 02:39:29.834798 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-02-09 02:39:29.834883 | orchestrator |
2026-02-09 02:39:29.834892 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-09 02:39:29.918268 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-09 02:39:29.918363 | orchestrator |
2026-02-09 02:39:29.918377 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-09 02:39:30.640581 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:30.640705 | orchestrator |
2026-02-09 02:39:30.640732 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-09 02:39:31.248850 | orchestrator | ok: [testbed-manager]
2026-02-09 02:39:31.248948 | orchestrator |
2026-02-09 02:39:31.248964 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-09 02:39:31.303698 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:39:31.303777 | orchestrator |
2026-02-09 02:39:31.303786 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-09 02:39:31.376805 | orchestrator | ok: [testbed-manager]
2026-02-09 02:39:31.376900 | orchestrator |
2026-02-09 02:39:31.376915 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-09 02:39:32.188284 | orchestrator | changed: [testbed-manager]
2026-02-09 02:39:32.188389 | orchestrator |
2026-02-09 02:39:32.188405 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-09 02:40:42.252360 | orchestrator | changed: [testbed-manager]
2026-02-09 02:40:42.252500 | orchestrator |
2026-02-09 02:40:42.252522 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-09 02:40:43.261951 | orchestrator | ok: [testbed-manager]
2026-02-09 02:40:43.262092 | orchestrator |
2026-02-09 02:40:43.262109 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-09 02:40:43.315119 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:40:43.315197 | orchestrator |
2026-02-09 02:40:43.315206 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-09 02:40:46.664025 | orchestrator | changed: [testbed-manager]
2026-02-09 02:40:46.664161 | orchestrator |
2026-02-09 02:40:46.664178 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-09 02:40:46.742633 | orchestrator | ok: [testbed-manager]
2026-02-09 02:40:46.742709 | orchestrator |
2026-02-09 02:40:46.742717 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-09 02:40:46.742724 | orchestrator |
2026-02-09 02:40:46.742729 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-09 02:40:46.902849 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:40:46.902926 | orchestrator |
2026-02-09 02:40:46.902934 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-09 02:41:46.952602 | orchestrator | Pausing for 60 seconds
2026-02-09 02:41:46.952734 | orchestrator | changed: [testbed-manager]
2026-02-09 02:41:46.952744 | orchestrator |
2026-02-09 02:41:46.952750 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-09 02:41:50.491770 | orchestrator | changed: [testbed-manager]
2026-02-09 02:41:50.491853 | orchestrator |
2026-02-09 02:41:50.491860 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-09 02:42:52.515637 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-09 02:42:52.515733 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-09 02:42:52.515790 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-09 02:42:52.515796 | orchestrator | changed: [testbed-manager]
2026-02-09 02:42:52.515803 | orchestrator |
2026-02-09 02:42:52.515808 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-09 02:43:03.401629 | orchestrator | changed: [testbed-manager]
2026-02-09 02:43:03.401739 | orchestrator |
2026-02-09 02:43:03.401826 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-09 02:43:03.482373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-09 02:43:03.482474 | orchestrator |
2026-02-09 02:43:03.482489 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-09 02:43:03.482502 | orchestrator |
2026-02-09 02:43:03.482513 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-09 02:43:03.528661 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:43:03.528756 | orchestrator |
2026-02-09 02:43:03.528823 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-09 02:43:03.621796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-09 02:43:03.621872 | orchestrator |
2026-02-09 02:43:03.621879 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-09 02:43:04.347742 | orchestrator | changed: [testbed-manager]
2026-02-09 02:43:04.347876 | orchestrator |
2026-02-09 02:43:04.347886 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-09 02:43:07.630156 | orchestrator | ok: [testbed-manager]
2026-02-09 02:43:07.630274 | orchestrator |
2026-02-09 02:43:07.630294 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-09 02:43:07.700909 | orchestrator | ok: [testbed-manager] => {
2026-02-09 02:43:07.701012 | orchestrator |     "version_check_result.stdout_lines": [
2026-02-09 02:43:07.701030 | orchestrator |         "=== OSISM Container Version Check ===",
2026-02-09 02:43:07.701043 | orchestrator |         "Checking running containers against expected versions...",
2026-02-09 02:43:07.701056 | orchestrator |         "",
2026-02-09 02:43:07.701068 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-09 02:43:07.701079 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-09 02:43:07.701091 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701103 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-09 02:43:07.701115 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.701126 | orchestrator |         "",
2026-02-09 02:43:07.701137 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-09 02:43:07.701177 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-09 02:43:07.701189 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701200 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-09 02:43:07.701211 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.701222 | orchestrator |         "",
2026-02-09 02:43:07.701233 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-09 02:43:07.701244 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-09 02:43:07.701255 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701266 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-09 02:43:07.701276 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.701287 | orchestrator |         "",
2026-02-09 02:43:07.701298 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-09 02:43:07.701309 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-09 02:43:07.701320 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701331 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-09 02:43:07.701342 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.701353 | orchestrator |         "",
2026-02-09 02:43:07.701367 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-09 02:43:07.701377 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-09 02:43:07.701388 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701402 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-09 02:43:07.701414 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.701427 | orchestrator |         "",
2026-02-09 02:43:07.701440 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-02-09 02:43:07.701453 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.701465 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701478 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.701491 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.701504 | orchestrator |         "",
2026-02-09 02:43:07.701516 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-02-09 02:43:07.701529 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-09 02:43:07.701542 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701556 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-09 02:43:07.701568 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.701582 | orchestrator |         "",
2026-02-09 02:43:07.701595 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-02-09 02:43:07.701607 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-09 02:43:07.701618 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701630 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-09 02:43:07.701648 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.701667 | orchestrator |         "",
2026-02-09 02:43:07.701684 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-02-09 02:43:07.701702 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-02-09 02:43:07.701720 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701738 | orchestrator |         "  Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-02-09 02:43:07.701924 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.701942 | orchestrator |         "",
2026-02-09 02:43:07.701953 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-02-09 02:43:07.701964 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-09 02:43:07.701975 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.701986 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-09 02:43:07.701997 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.702014 | orchestrator |         "",
2026-02-09 02:43:07.702117 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-02-09 02:43:07.702152 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702169 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.702187 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702206 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.702226 | orchestrator |         "",
2026-02-09 02:43:07.702244 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-02-09 02:43:07.702263 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702275 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.702286 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702303 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.702321 | orchestrator |         "",
2026-02-09 02:43:07.702348 | orchestrator |         "Checking service: openstack (OpenStack Integration)",
2026-02-09 02:43:07.702369 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702387 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.702404 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702422 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.702441 | orchestrator |         "",
2026-02-09 02:43:07.702459 | orchestrator |         "Checking service: beat (Celery Beat Scheduler)",
2026-02-09 02:43:07.702478 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702496 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.702514 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702565 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.702584 | orchestrator |         "",
2026-02-09 02:43:07.702601 | orchestrator |         "Checking service: flower (Celery Flower Monitor)",
2026-02-09 02:43:07.702618 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702653 | orchestrator |         "  Enabled: true",
2026-02-09 02:43:07.702672 | orchestrator |         "  Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-09 02:43:07.702691 | orchestrator |         "  Status: ✅ MATCH",
2026-02-09 02:43:07.702704 | orchestrator |         "",
2026-02-09 02:43:07.702715 | orchestrator |         "=== Summary ===",
2026-02-09 02:43:07.702726 | orchestrator |         "Errors (version mismatches): 0",
2026-02-09 02:43:07.702737 | orchestrator |         "Warnings (expected containers not running): 0",
2026-02-09 02:43:07.702754 | orchestrator |         "",
2026-02-09 02:43:07.702772 | orchestrator |         "✅ All running containers match expected versions!"
2026-02-09 02:43:07.702825 | orchestrator |     ]
2026-02-09 02:43:07.702846 | orchestrator | }
2026-02-09 02:43:07.702863 | orchestrator |
2026-02-09 02:43:07.702882 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-02-09 02:43:07.752365 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:43:07.752452 | orchestrator |
2026-02-09 02:43:07.752465 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:43:07.752476 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-02-09 02:43:07.752484 | orchestrator |
2026-02-09 02:43:07.826861 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-09 02:43:07.826985 | orchestrator | + deactivate
2026-02-09 02:43:07.827011 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-09 02:43:07.827032 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-09 02:43:07.827052 | orchestrator | + export PATH
2026-02-09 02:43:07.827073 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-09 02:43:07.827092 | orchestrator | + '[' -n '' ']'
2026-02-09 02:43:07.827112 | orchestrator | + hash -r
2026-02-09 02:43:07.827124 | orchestrator | + '[' -n '' ']'
2026-02-09 02:43:07.827139 | orchestrator | + unset VIRTUAL_ENV
2026-02-09 02:43:07.827158 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-09 02:43:07.827176 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-09 02:43:07.827194 | orchestrator | + unset -f deactivate
2026-02-09 02:43:07.827214 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-02-09 02:43:07.832564 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-09 02:43:07.832619 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-02-09 02:43:07.832645 | orchestrator | + local max_attempts=60
2026-02-09 02:43:07.832650 | orchestrator | + local name=ceph-ansible
2026-02-09 02:43:07.832654 | orchestrator | + local attempt_num=1
2026-02-09 02:43:07.833529 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-09 02:43:07.869484 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-09 02:43:07.869582 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-09 02:43:07.869883 | orchestrator | + local max_attempts=60
2026-02-09 02:43:07.870080 | orchestrator | + local name=kolla-ansible
2026-02-09 02:43:07.870104 | orchestrator | + local attempt_num=1
2026-02-09 02:43:07.870124 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-09 02:43:07.901646 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-09 02:43:07.901728 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-09 02:43:07.901738 | orchestrator | + local max_attempts=60
2026-02-09 02:43:07.901745 | orchestrator | + local name=osism-ansible
2026-02-09 02:43:07.901751 | orchestrator | + local attempt_num=1
2026-02-09 02:43:07.902098 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-09 02:43:07.937710 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-09 02:43:07.937818 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-09 02:43:07.937832 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-02-09 02:43:08.602085 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-02-09 02:43:08.778976 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-02-09 02:43:08.779229 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-02-09 02:43:08.779265 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-02-09 02:43:08.779290 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-02-09 02:43:08.779316 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-02-09 02:43:08.779375 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-02-09 02:43:08.779402 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-02-09 02:43:08.779426 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-02-09 02:43:08.779454 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-02-09 02:43:08.779473 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-02-09 02:43:08.779491 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-02-09 02:43:08.779519 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-02-09 02:43:08.779549 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-02-09 02:43:08.779633 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-02-09 02:43:08.779662 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-02-09 02:43:08.779690 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-02-09 02:43:08.785549 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-09 02:43:08.838347 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-09 02:43:08.838456 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-02-09 02:43:08.842907 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-02-09 02:43:20.996200 | orchestrator | 2026-02-09 02:43:20 | INFO  | Task 7b785f79-b0ed-4a52-87ff-48d933e611f9 (resolvconf) was prepared for execution.
2026-02-09 02:43:20.996280 | orchestrator | 2026-02-09 02:43:20 | INFO  | It takes a moment until task 7b785f79-b0ed-4a52-87ff-48d933e611f9 (resolvconf) has been started and output is visible here.
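The trace above calls a `semver` helper (`semver 9.5.0 7.0.0` printing `1`, meaning the first version is newer) to gate a version-dependent branch. A hedged pure-shell reimplementation of that comparison using GNU `sort -V`, assuming the helper's contract is "print 1/0/-1 for greater/equal/less" (the testbed's actual helper may differ):

```shell
# Compare two dotted version strings: print 1 if $1 > $2, 0 if equal, -1 if less.
# Relies on GNU sort's version-sort (-V) to order the two versions.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; then
        # $2 sorts first, so $1 is the newer version
        echo 1
    else
        echo -1
    fi
}
```

With this sketch, `[[ $(semver_cmp 9.5.0 7.0.0) -ge 0 ]]` mirrors the `[[ 1 -ge 0 ]]` check in the trace.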
2026-02-09 02:43:35.652892 | orchestrator |
2026-02-09 02:43:35.653008 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-02-09 02:43:35.653022 | orchestrator |
2026-02-09 02:43:35.653032 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-09 02:43:35.653042 | orchestrator | Monday 09 February 2026 02:43:25 +0000 (0:00:00.149) 0:00:00.149 *******
2026-02-09 02:43:35.653051 | orchestrator | ok: [testbed-manager]
2026-02-09 02:43:35.653061 | orchestrator |
2026-02-09 02:43:35.653070 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-09 02:43:35.653081 | orchestrator | Monday 09 February 2026 02:43:29 +0000 (0:00:04.054) 0:00:04.203 *******
2026-02-09 02:43:35.653090 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:43:35.653100 | orchestrator |
2026-02-09 02:43:35.653109 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-09 02:43:35.653119 | orchestrator | Monday 09 February 2026 02:43:29 +0000 (0:00:00.067) 0:00:04.271 *******
2026-02-09 02:43:35.653128 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-02-09 02:43:35.653138 | orchestrator |
2026-02-09 02:43:35.653148 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-09 02:43:35.653157 | orchestrator | Monday 09 February 2026 02:43:29 +0000 (0:00:00.091) 0:00:04.363 *******
2026-02-09 02:43:35.653186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-02-09 02:43:35.653196 | orchestrator |
2026-02-09 02:43:35.653205 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-09 02:43:35.653215 | orchestrator | Monday 09 February 2026 02:43:29 +0000 (0:00:00.087) 0:00:04.451 *******
2026-02-09 02:43:35.653224 | orchestrator | ok: [testbed-manager]
2026-02-09 02:43:35.653233 | orchestrator |
2026-02-09 02:43:35.653242 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-09 02:43:35.653252 | orchestrator | Monday 09 February 2026 02:43:30 +0000 (0:00:01.169) 0:00:05.621 *******
2026-02-09 02:43:35.653261 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:43:35.653270 | orchestrator |
2026-02-09 02:43:35.653327 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-09 02:43:35.653338 | orchestrator | Monday 09 February 2026 02:43:30 +0000 (0:00:00.065) 0:00:05.686 *******
2026-02-09 02:43:35.653375 | orchestrator | ok: [testbed-manager]
2026-02-09 02:43:35.653385 | orchestrator |
2026-02-09 02:43:35.653395 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-09 02:43:35.653405 | orchestrator | Monday 09 February 2026 02:43:31 +0000 (0:00:00.527) 0:00:06.214 *******
2026-02-09 02:43:35.653415 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:43:35.653427 | orchestrator |
2026-02-09 02:43:35.653439 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-09 02:43:35.653451 | orchestrator | Monday 09 February 2026 02:43:31 +0000 (0:00:00.079) 0:00:06.294 *******
2026-02-09 02:43:35.653461 | orchestrator | changed: [testbed-manager]
2026-02-09 02:43:35.653470 | orchestrator |
2026-02-09 02:43:35.653479 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-09 02:43:35.653489 | orchestrator | Monday 09 February 2026 02:43:31 +0000 (0:00:00.570) 0:00:06.865 *******
2026-02-09 02:43:35.653498 | orchestrator | changed: [testbed-manager]
2026-02-09 02:43:35.653507 | orchestrator |
2026-02-09 02:43:35.653517 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-09 02:43:35.653526 | orchestrator | Monday 09 February 2026 02:43:33 +0000 (0:00:01.103) 0:00:07.968 *******
2026-02-09 02:43:35.653536 | orchestrator | ok: [testbed-manager]
2026-02-09 02:43:35.653548 | orchestrator |
2026-02-09 02:43:35.653557 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-09 02:43:35.653567 | orchestrator | Monday 09 February 2026 02:43:34 +0000 (0:00:01.015) 0:00:08.984 *******
2026-02-09 02:43:35.653589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-02-09 02:43:35.653601 | orchestrator |
2026-02-09 02:43:35.653613 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-09 02:43:35.653624 | orchestrator | Monday 09 February 2026 02:43:34 +0000 (0:00:00.075) 0:00:09.059 *******
2026-02-09 02:43:35.653636 | orchestrator | changed: [testbed-manager]
2026-02-09 02:43:35.653645 | orchestrator |
2026-02-09 02:43:35.653655 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:43:35.653666 | orchestrator | testbed-manager : ok=10 changed=3 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2026-02-09 02:43:35.653676 | orchestrator |
2026-02-09 02:43:35.653685 | orchestrator |
2026-02-09 02:43:35.653695 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:43:35.653705 | orchestrator | Monday 09 February 2026 02:43:35 +0000 (0:00:01.193) 0:00:10.253 *******
2026-02-09 02:43:35.653714 | orchestrator | ===============================================================================
2026-02-09 02:43:35.653724 | orchestrator | Gathering Facts --------------------------------------------------------- 4.05s
2026-02-09 02:43:35.653734 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.19s
2026-02-09 02:43:35.653743 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.17s
2026-02-09 02:43:35.653753 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s
2026-02-09 02:43:35.653762 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s
2026-02-09 02:43:35.653771 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.57s
2026-02-09 02:43:35.653798 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.53s
2026-02-09 02:43:35.653808 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2026-02-09 02:43:35.653818 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2026-02-09 02:43:35.653847 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-02-09 02:43:35.653856 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-02-09 02:43:35.653865 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-02-09 02:43:35.653886 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-02-09 02:43:35.995413 | orchestrator | + osism apply sshconfig
2026-02-09 02:43:48.064002 | orchestrator | 2026-02-09 02:43:48 | INFO  | Task f496e871-9fd6-4781-bed7-d0a360759cd2 (sshconfig) was prepared for execution.
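The resolvconf play's key change is linking `/run/systemd/resolve/stub-resolv.conf` to `/etc/resolv.conf`, the standard systemd-resolved stub setup. A sketch of the equivalent shell steps, replayed in a scratch directory so nothing on a real system is touched (the role itself does this via Ansible modules, not these commands):

```shell
# Reproduce the stub-resolv.conf symlink layout under a temporary root.
root=$(mktemp -d)
mkdir -p "$root/run/systemd/resolve" "$root/etc"

# systemd-resolved writes the stub file pointing DNS at its local listener.
echo "nameserver 127.0.0.53" > "$root/run/systemd/resolve/stub-resolv.conf"

# /etc/resolv.conf becomes a relative symlink to the stub file, matching the
# "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" task.
ln -sf ../run/systemd/resolve/stub-resolv.conf "$root/etc/resolv.conf"
```

Reading `$root/etc/resolv.conf` then resolves through the symlink to the stub file's contents, which is exactly what programs on the host see after the task runs.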
2026-02-09 02:43:48.064085 | orchestrator | 2026-02-09 02:43:48 | INFO  | It takes a moment until task f496e871-9fd6-4781-bed7-d0a360759cd2 (sshconfig) has been started and output is visible here.
2026-02-09 02:43:59.307533 | orchestrator |
2026-02-09 02:43:59.307814 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-02-09 02:43:59.307848 | orchestrator |
2026-02-09 02:43:59.307966 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-02-09 02:43:59.307993 | orchestrator | Monday 09 February 2026 02:43:52 +0000 (0:00:00.147) 0:00:00.147 *******
2026-02-09 02:43:59.308012 | orchestrator | ok: [testbed-manager]
2026-02-09 02:43:59.308032 | orchestrator |
2026-02-09 02:43:59.308052 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-02-09 02:43:59.308071 | orchestrator | Monday 09 February 2026 02:43:52 +0000 (0:00:00.541) 0:00:00.688 *******
2026-02-09 02:43:59.308090 | orchestrator | changed: [testbed-manager]
2026-02-09 02:43:59.308110 | orchestrator |
2026-02-09 02:43:59.308129 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-02-09 02:43:59.308148 | orchestrator | Monday 09 February 2026 02:43:53 +0000 (0:00:00.531) 0:00:01.220 *******
2026-02-09 02:43:59.308165 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-02-09 02:43:59.308185 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-02-09 02:43:59.308204 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-02-09 02:43:59.308224 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-02-09 02:43:59.308243 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-02-09 02:43:59.308262 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-02-09 02:43:59.308280 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-02-09 02:43:59.308298 | orchestrator |
2026-02-09 02:43:59.308310 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-02-09 02:43:59.308321 | orchestrator | Monday 09 February 2026 02:43:58 +0000 (0:00:05.300) 0:00:06.521 *******
2026-02-09 02:43:59.308333 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:43:59.308343 | orchestrator |
2026-02-09 02:43:59.308354 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-02-09 02:43:59.308365 | orchestrator | Monday 09 February 2026 02:43:58 +0000 (0:00:00.064) 0:00:06.586 *******
2026-02-09 02:43:59.308376 | orchestrator | changed: [testbed-manager]
2026-02-09 02:43:59.308387 | orchestrator |
2026-02-09 02:43:59.308398 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:43:59.308410 | orchestrator | testbed-manager : ok=4 changed=3 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-02-09 02:43:59.308422 | orchestrator |
2026-02-09 02:43:59.308433 | orchestrator |
2026-02-09 02:43:59.308444 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:43:59.308455 | orchestrator | Monday 09 February 2026 02:43:59 +0000 (0:00:00.565) 0:00:07.151 *******
2026-02-09 02:43:59.308466 | orchestrator | ===============================================================================
2026-02-09 02:43:59.308477 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.30s
2026-02-09 02:43:59.308488 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s
2026-02-09 02:43:59.308498 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.54s
2026-02-09 02:43:59.308509 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.53s
2026-02-09 02:43:59.308520 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2026-02-09 02:43:59.675327 | orchestrator | + osism apply known-hosts
2026-02-09 02:44:11.970613 | orchestrator | 2026-02-09 02:44:11 | INFO  | Task 2aa4484a-ce97-4a53-af5f-53823a3aa1ec (known-hosts) was prepared for execution.
2026-02-09 02:44:11.970726 | orchestrator | 2026-02-09 02:44:11 | INFO  | It takes a moment until task 2aa4484a-ce97-4a53-af5f-53823a3aa1ec (known-hosts) has been started and output is visible here.
2026-02-09 02:44:28.836285 | orchestrator |
2026-02-09 02:44:28.836380 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-02-09 02:44:28.836391 | orchestrator |
2026-02-09 02:44:28.836399 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-02-09 02:44:28.836411 | orchestrator | Monday 09 February 2026 02:44:16 +0000 (0:00:00.179) 0:00:00.179 *******
2026-02-09 02:44:28.836429 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-02-09 02:44:28.836440 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-02-09 02:44:28.836450 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-02-09 02:44:28.836462 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-02-09 02:44:28.836471 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-09 02:44:28.836477 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-09 02:44:28.836484 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-09 02:44:28.836490 | orchestrator |
2026-02-09 02:44:28.836497 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-02-09 02:44:28.836504 | orchestrator | Monday 09 February 2026 02:44:22 +0000 (0:00:06.108) 0:00:06.287 *******
2026-02-09 02:44:28.836511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-09 02:44:28.836519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-09 02:44:28.836526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-09 02:44:28.836532 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-09 02:44:28.836538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-09 02:44:28.836553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-09 02:44:28.836559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-09 02:44:28.836565 | orchestrator |
2026-02-09 02:44:28.836572 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-09 02:44:28.836578 | orchestrator | Monday 09 February 2026 02:44:22 +0000 (0:00:00.158) 0:00:06.446 *******
2026-02-09 02:44:28.836585 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCYupfDJbITUypnsyVqrh2YZN5rxBmhETecR+N2aFJTv6oJqvb/YHnlZ97PdM8OlMxTkLw+xfVkKNpmOEujzZco=)
2026-02-09 02:44:28.836599 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuYso9D+f+WtM+7NNGvML/wcYzajszOdsA7acc/vdllSOLeSOsX8x3B6pGuCudXL8yMwsR0dNZYXzfILYWsvQZMWOQWaDQmYe5lr+uFUDAbZCZbSP2Jt56dgewGuVA+ii5wmN40Op6VvomcVan/DLKokYqOwAbwLK8yULCx7VsIAcRtMZA5Su1aZQKl6H6JYGhTJjf9blwqB77q3ZoNSek5xuKucJnzCXtT2buxvRjPpO5mFVZdVtU3Cl55DeP1y8LT1seEun/fh8RiIt4UvTfK2JG8N+c9CI6EsWXcknOp+2bU5IDu7WV50r2qr5qLjB4ito1s3LpAJvARBegZvdaw9wjGrNp0TmPEeO18JwI5YKat2NsQPHpHjY/rp55sVd80hP7JEGF7p8t6orv17VTVveYeaxocE2V0tvtHXXpKsunaz2HfOE5GrX84iZf+GXuhQaLnJrVOdJx8w36qIk3NKu9q14Ze30UTG2jumZkpUfOZXfjujTGZ1fDVzID07M=)
2026-02-09 02:44:28.836625 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArVjnaRZ2EE7oUfYKr8gOzlLSdntjUkBiMfJXVRaVGt)
2026-02-09 02:44:28.836633 | orchestrator |
2026-02-09 02:44:28.836640 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-09 02:44:28.836646 | orchestrator | Monday 09 February 2026 02:44:23 +0000 (0:00:01.070) 0:00:07.517 *******
2026-02-09 02:44:28.836667 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCy/h8d7Gt3b8aNsQh9n7tBqnyQjVgyGEUVwJJxuQT/9tns5ajSY3aN/lqc21TfxMl02jEHHkzk3aZvRaNGIFiYGEnHzg8857h3CoEMHuqpsShQLDGi0DDSM0KDO3qESLWQKVJVfLZARUUeENuQGU/9aW8XPzowPtN8ikcXzRC6R6GAd1xlxcABeHY+jCD6a+JsNzm/+mH6Naf6MPfEb7Mvja5gw7uI+CszGkWd9j5hjssCH9VwM4XeJ2W2tGoOeuqwMPqnVfpS4izpbQbNvncfdYXBvZQ60TKAC4NF7PKkaeSiCz5mOgnxvqFO1BX6Pt7/P3ZGY4bdbn57n0xSJevg8QqArvgPQ4bnrjdxt7X82TWmEgwtsa5ODOqro2byIPS5uSPmdBy+7/6ZgZ91ajpyRL04H8+6slEWwA44DIe0u0lem8K4CZ+a8EONm+RFYbmf5iMHQgnwOC6uq+LUpclsPUB85tqzsa7cNNaeou40D9XoPp5NNqJ0uGIurj22Rq8=)
2026-02-09 02:44:28.836674 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDlFpCS2JfybWww6fb8+W9zdFQxyXnNx+NziOXcV+6fwL7P9NEZGRmU1m4Jz3Ex4VQdWEMKfl/lpNXZtJx+E3r0=)
2026-02-09 02:44:28.836680 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILWuIpFyYSgZyPa6eHAXHbBC2uG/vDC6FkHlL+/2qFOk)
2026-02-09 02:44:28.836687 | orchestrator |
2026-02-09 02:44:28.836693 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-09 02:44:28.836699 | orchestrator | Monday 09 February 2026 02:44:24 +0000 (0:00:00.963) 0:00:08.480 *******
2026-02-09 02:44:28.836705 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO+nIO1AMzzm6qT8LtVR+bDXtRXzsiwLNn6lLlo4FVvZ)
2026-02-09 02:44:28.836712 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/MLVXx61uf6jPS3HSJXdD+wbRkmpaW9hhJ9YVwBv5lIpPypxOigX5t/aCXAZTsl/cLIArevJkVYfbGQ4Ggjs+nrjIAXI9jw/JPOK2cBRRqy6Q0nrsIUQU92Ai2Oi4Td5CRf9Hg1M+x6OvHAWOimnZ8lyC6StV+ziXKIipuwRrpMZYtHoc2+n9guI4ztGW3yrHnvKljfTb73wKEz80rPKN8Mx1FFliaEF9B4Y9rGrYprapMpKoLNvBw/7UK2YEFDunsWCZibRCKRmi7KTzoWmN8hUFF73fz3/b+S1x9xkCZ3LhhmBKTVtLM9RdbVswbjNEgcHPSCE4qsSlWdAShpUQca2sDewWBxNhQeI1Ivny1sWc1FJ/SK4SLROJRXFQTU4WBtcnDVwflQ9N9Vn7thI9/3CBmDZMgBbfdfuOH6JSEjcfXXY0pn5v9Edl/pft2vZmWuz3XcCD5UjQ3bKSdUA2MoofIk2WkWWsWSB+lvwM9vhm+3/9/dpMrwjg05s/ucM=)
2026-02-09 02:44:28.836719 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDvUdV0ynLwlnrYYErCvAln7oPdwyKVDmY2j5NxQnePfmpa+1UP7Km86W/jDnoKT0eKxmJWWFYyLBrpFfT2AzGk=)
2026-02-09 02:44:28.836725 | orchestrator |
2026-02-09 02:44:28.836731 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-09 02:44:28.836738 | orchestrator | Monday 09 February 2026 02:44:25 +0000 (0:00:01.018) 0:00:09.499 *******
2026-02-09 02:44:28.836744 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOIxwDXaegnvG4M6iNaT4Gl4e/P8kOweIs6EwxirXZpfSVTZDUJRzO4hM4/L7qD8d7bCogdjG0Gb2tGiyGd9fFs=)
2026-02-09 02:44:28.836750 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC70JHo/cwVUjbHHz7VRhaW5/brCOnA3lGVVdeXtJ5qI)
2026-02-09 02:44:28.836757 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC41uWJXJZGthZSTeMln9xfp5l1L4ZHE1yOBEKwwKUvc7r8LZDAc8QcdZsf5Am3t1P0fd+W2LieXhze0S+gV5p51scgd8l1Mq2Oi1dur2cUPFWGBsOvZxS0HQySYujo8qZSxUFNYbaEpE5Sf3A2rQPDyCpYHTXZelA52IU/GiAms6CYw2sVdec3MEQcPNw4ll80T1DqKTcGomRq/CPtZFs8NbbDBLTY7L0knpTqPvKKRFbIKf1mMygn3jWc2CMJkn2xZE1rp3Zo5a6D2ILaD12XWXQHxAg4jBPMTLd/7twegDdg+Bv/rbIK1V/rElAnJU6wRi8iRPtUSvsCEqycwcchLNXgqQRPz8wa6nITUb+m7t1jKhZ/lC6WhAPP92UHul3si7QVjl7/5Jmfcgv3U0uV+X/Xv1bcIpUg/Q56Yx2tlD/qgQ0fsbJfJHhCrKBd5Hk5ONaApaBKTv7l01erjvmV5YBQJCOUUV5hFyU8rH7xrgjx7J0oPD+ICuEXSuM+0Lk=)
2026-02-09 02:44:28.836769 | orchestrator |
2026-02-09 02:44:28.836775 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-09 02:44:28.836781 | orchestrator | Monday 09 February 2026 02:44:26 +0000 (0:00:01.074) 0:00:10.574 *******
2026-02-09 02:44:28.836842 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGJkOe2oJtb3P19ynVyzkrg5IN2XlzEF1QqJPZ8rtub/GifhVCjZoR3g0IoEVEqtHSPcuvkaZwPV0mOTF7VtuBI=)
2026-02-09 02:44:28.836851 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDGrVn1Jev0wYuSL+wqEp+6RPK9WucEubunO1BLzATSPk2DW0e5WSsAGWD3d13vvP0ZYK/MSnN9dCbHfhL8hukwM3nCWPUNLa5kiGDGS9kiOoA5FzO1BOTyUk/E2tZ2ZqEKmJpPMFbMEcWyBooXC9YC/8sVRtRt9KLwfXGYCSoufqVTl8+elCJamzQk+0GK14lH3rtj/AzVcKXdaYmIbb3mAXHIz4fVJtiFhUlI5oYDuwmrzxdZlYzZKm8yah6jQosBCrfWgdevUEYxno+d3INVC27Z9PuaksEEO2aeApJtSYy9BGzNqTavqiqLogx7CnzpYPAwGizKjkI4buQfPNHHFxQEz7sodrPXAiCqbKUoupt6wDsK/kjq+Ys+AibwCbYSJIDrVeT4T+gXfP437FbxgDj/gK6h5FXkMDgISiw3s4Za/jutRERkOwlPN0OPrMzycVGGSQSDaKtHUaEw6yE8y6z0ezH6vXQF4z9ULixK5kUIyLDyO1CNEej+Q+MPy1M=) 2026-02-09 02:44:28.836859 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOXWH9D20ijMN1FBTXueOC3/40DNYIrgsGx5HGtK8Ycy) 2026-02-09 02:44:28.836867 | orchestrator | 2026-02-09 02:44:28.836874 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-09 02:44:28.836882 | orchestrator | Monday 09 February 2026 02:44:27 +0000 (0:00:01.007) 0:00:11.581 ******* 2026-02-09 02:44:28.836895 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzlJEHaEB6DvRmEe6MkatLF7NsmDoFhLzd1/ce2/HC3v9k23qgn97ua9AZwyOeDSDPUMK/ic41PNHbsBQHxK6MXGdsa4AR8so/xjPxbYRwMH6XqI7DCbQD1XkB7dkTpmLG0MF1771AKu6PFEEeuRlo44sbJUxRNrbrdJ1htGUjxcDMUKGxUO3qfXXmwSlqZc2H/CDmcOrZ9RgKX6fUwMLB81Dt0RqxSkVwq0kby6kCArLEUQiV5thdWyuFOFQmAYZlKUD8UJmBY46r9NBkwXa6LHBEmMh2OJKhQpQN4/bsLKzMNybDt93x2iDsb1vgKOpUm3bQfrT49iyHvFb7ah3Qbjh76jCaecU+h2Pq37Y3ct7L1M8pV2Wjwwo3QgrnBBL3jtkSNC4ZTAhfLcacfzuITjqm4VIekIVlIlLXFbhw+ZDxUF21Gdl6B64gKpKObqqILpvhuSOc1FQeCAgFRfb5+uyVL8gqohfzguGQMaFOkkyrTQT5TiDgKg07ADbXc78=) 2026-02-09 02:44:40.066663 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIk4y2hEDFCHARSB0hkN9DGX0aY3sBZFjyg6uVukNSEu) 2026-02-09 02:44:40.066778 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBImXufXZLy473pSD1xhyLwVRPicZBZai/voJ87UfT0UbZMuP037MOvmji0Qshwq7i2eH+yIN1YuSnnXpZewxRnk=) 2026-02-09 02:44:40.066787 | orchestrator | 2026-02-09 02:44:40.066794 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-09 02:44:40.066802 | orchestrator | Monday 09 February 2026 02:44:28 +0000 (0:00:01.007) 0:00:12.589 ******* 2026-02-09 02:44:40.066808 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCW/2mDnwi9bqilgj9Ne/LRXEXPODti8TTuq3QCCFmpM0RnLfhkW4qDwwJBkz8oVSksMKfyidxL+2D4oxcAlqlW6jeErGQQ5ZCmjYGjmtlKJ/n0LTaoi8ytUYeOYrRu9IMEP9FKB1kw5C68Tb/B6xAjDISXrVUayWVAEWzDkFjqCjRwsjJ/j314l5RlvzsQqHwW34YJEqj5uLG9gPx5175EdR44N7c9gg5EpO+Ipil+XJxyAAFofQGkM+M4sRTRu09E5bmqp7oi5JWOR1uu2Ku/oosh6OF6rDvkVnOONQZwxmnAsFBM0cd/mUv/kCjwbjANiLoVL/FA96L07vpMJv55MU0aZK3/cg8RAoYLVFrX7byAUyw/ks887+NNwFcXAD1eR+8wkCkPd46A7lqDVkLW1eOlcptWCDq/JRsdxBibhXcB5PG/xKbMS7u4B/k/Q4XHkPHoARDlX3gmU6T5++bBTtsI10XZWnGIsvLunk8+lSKT6tUeK09jXn7jjSK9Z78=) 2026-02-09 02:44:40.066816 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSUAC0eMgcHulsT8CRUPfyVd+8Cf8DNWkstNY5kQAFSqTCKejvH4WPJTUij/sv/AGvCBltdA0+G/l7OYEynV2w=) 2026-02-09 02:44:40.066843 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHPfUHQCicOydVdgQh7gZbJFhpbh5b0Xq1cF57SUe8J/) 2026-02-09 02:44:40.066849 | orchestrator | 2026-02-09 02:44:40.066854 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-09 02:44:40.066861 | orchestrator | Monday 09 February 2026 02:44:29 +0000 (0:00:00.982) 0:00:13.571 ******* 2026-02-09 02:44:40.066866 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-09 02:44:40.066872 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-09 
02:44:40.066877 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-09 02:44:40.066882 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-09 02:44:40.066886 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-09 02:44:40.066891 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-09 02:44:40.066896 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-09 02:44:40.066901 | orchestrator | 2026-02-09 02:44:40.066906 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-09 02:44:40.066912 | orchestrator | Monday 09 February 2026 02:44:35 +0000 (0:00:05.302) 0:00:18.874 ******* 2026-02-09 02:44:40.066918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-09 02:44:40.066962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-09 02:44:40.066968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-09 02:44:40.066973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-09 02:44:40.066978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-09 02:44:40.066983 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-09 02:44:40.066987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-09 02:44:40.066992 | orchestrator | 2026-02-09 02:44:40.066997 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-09 02:44:40.067002 | orchestrator | Monday 09 February 2026 02:44:35 +0000 (0:00:00.201) 0:00:19.075 ******* 2026-02-09 02:44:40.067007 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArVjnaRZ2EE7oUfYKr8gOzlLSdntjUkBiMfJXVRaVGt) 2026-02-09 02:44:40.067052 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuYso9D+f+WtM+7NNGvML/wcYzajszOdsA7acc/vdllSOLeSOsX8x3B6pGuCudXL8yMwsR0dNZYXzfILYWsvQZMWOQWaDQmYe5lr+uFUDAbZCZbSP2Jt56dgewGuVA+ii5wmN40Op6VvomcVan/DLKokYqOwAbwLK8yULCx7VsIAcRtMZA5Su1aZQKl6H6JYGhTJjf9blwqB77q3ZoNSek5xuKucJnzCXtT2buxvRjPpO5mFVZdVtU3Cl55DeP1y8LT1seEun/fh8RiIt4UvTfK2JG8N+c9CI6EsWXcknOp+2bU5IDu7WV50r2qr5qLjB4ito1s3LpAJvARBegZvdaw9wjGrNp0TmPEeO18JwI5YKat2NsQPHpHjY/rp55sVd80hP7JEGF7p8t6orv17VTVveYeaxocE2V0tvtHXXpKsunaz2HfOE5GrX84iZf+GXuhQaLnJrVOdJx8w36qIk3NKu9q14Ze30UTG2jumZkpUfOZXfjujTGZ1fDVzID07M=) 2026-02-09 02:44:40.067058 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCYupfDJbITUypnsyVqrh2YZN5rxBmhETecR+N2aFJTv6oJqvb/YHnlZ97PdM8OlMxTkLw+xfVkKNpmOEujzZco=) 2026-02-09 02:44:40.067068 | orchestrator | 2026-02-09 02:44:40.067074 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-09 02:44:40.067079 | orchestrator | Monday 09 February 2026 
02:44:36 +0000 (0:00:01.176) 0:00:20.252 ******* 2026-02-09 02:44:40.067084 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCy/h8d7Gt3b8aNsQh9n7tBqnyQjVgyGEUVwJJxuQT/9tns5ajSY3aN/lqc21TfxMl02jEHHkzk3aZvRaNGIFiYGEnHzg8857h3CoEMHuqpsShQLDGi0DDSM0KDO3qESLWQKVJVfLZARUUeENuQGU/9aW8XPzowPtN8ikcXzRC6R6GAd1xlxcABeHY+jCD6a+JsNzm/+mH6Naf6MPfEb7Mvja5gw7uI+CszGkWd9j5hjssCH9VwM4XeJ2W2tGoOeuqwMPqnVfpS4izpbQbNvncfdYXBvZQ60TKAC4NF7PKkaeSiCz5mOgnxvqFO1BX6Pt7/P3ZGY4bdbn57n0xSJevg8QqArvgPQ4bnrjdxt7X82TWmEgwtsa5ODOqro2byIPS5uSPmdBy+7/6ZgZ91ajpyRL04H8+6slEWwA44DIe0u0lem8K4CZ+a8EONm+RFYbmf5iMHQgnwOC6uq+LUpclsPUB85tqzsa7cNNaeou40D9XoPp5NNqJ0uGIurj22Rq8=) 2026-02-09 02:44:40.067089 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILWuIpFyYSgZyPa6eHAXHbBC2uG/vDC6FkHlL+/2qFOk) 2026-02-09 02:44:40.067094 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDlFpCS2JfybWww6fb8+W9zdFQxyXnNx+NziOXcV+6fwL7P9NEZGRmU1m4Jz3Ex4VQdWEMKfl/lpNXZtJx+E3r0=) 2026-02-09 02:44:40.067099 | orchestrator | 2026-02-09 02:44:40.067104 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-09 02:44:40.067109 | orchestrator | Monday 09 February 2026 02:44:37 +0000 (0:00:01.162) 0:00:21.414 ******* 2026-02-09 02:44:40.067114 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC/MLVXx61uf6jPS3HSJXdD+wbRkmpaW9hhJ9YVwBv5lIpPypxOigX5t/aCXAZTsl/cLIArevJkVYfbGQ4Ggjs+nrjIAXI9jw/JPOK2cBRRqy6Q0nrsIUQU92Ai2Oi4Td5CRf9Hg1M+x6OvHAWOimnZ8lyC6StV+ziXKIipuwRrpMZYtHoc2+n9guI4ztGW3yrHnvKljfTb73wKEz80rPKN8Mx1FFliaEF9B4Y9rGrYprapMpKoLNvBw/7UK2YEFDunsWCZibRCKRmi7KTzoWmN8hUFF73fz3/b+S1x9xkCZ3LhhmBKTVtLM9RdbVswbjNEgcHPSCE4qsSlWdAShpUQca2sDewWBxNhQeI1Ivny1sWc1FJ/SK4SLROJRXFQTU4WBtcnDVwflQ9N9Vn7thI9/3CBmDZMgBbfdfuOH6JSEjcfXXY0pn5v9Edl/pft2vZmWuz3XcCD5UjQ3bKSdUA2MoofIk2WkWWsWSB+lvwM9vhm+3/9/dpMrwjg05s/ucM=) 2026-02-09 02:44:40.067119 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO+nIO1AMzzm6qT8LtVR+bDXtRXzsiwLNn6lLlo4FVvZ) 2026-02-09 02:44:40.067124 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDvUdV0ynLwlnrYYErCvAln7oPdwyKVDmY2j5NxQnePfmpa+1UP7Km86W/jDnoKT0eKxmJWWFYyLBrpFfT2AzGk=) 2026-02-09 02:44:40.067129 | orchestrator | 2026-02-09 02:44:40.067134 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-09 02:44:40.067138 | orchestrator | Monday 09 February 2026 02:44:38 +0000 (0:00:01.198) 0:00:22.613 ******* 2026-02-09 02:44:40.067143 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC41uWJXJZGthZSTeMln9xfp5l1L4ZHE1yOBEKwwKUvc7r8LZDAc8QcdZsf5Am3t1P0fd+W2LieXhze0S+gV5p51scgd8l1Mq2Oi1dur2cUPFWGBsOvZxS0HQySYujo8qZSxUFNYbaEpE5Sf3A2rQPDyCpYHTXZelA52IU/GiAms6CYw2sVdec3MEQcPNw4ll80T1DqKTcGomRq/CPtZFs8NbbDBLTY7L0knpTqPvKKRFbIKf1mMygn3jWc2CMJkn2xZE1rp3Zo5a6D2ILaD12XWXQHxAg4jBPMTLd/7twegDdg+Bv/rbIK1V/rElAnJU6wRi8iRPtUSvsCEqycwcchLNXgqQRPz8wa6nITUb+m7t1jKhZ/lC6WhAPP92UHul3si7QVjl7/5Jmfcgv3U0uV+X/Xv1bcIpUg/Q56Yx2tlD/qgQ0fsbJfJHhCrKBd5Hk5ONaApaBKTv7l01erjvmV5YBQJCOUUV5hFyU8rH7xrgjx7J0oPD+ICuEXSuM+0Lk=) 2026-02-09 02:44:40.067148 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOIxwDXaegnvG4M6iNaT4Gl4e/P8kOweIs6EwxirXZpfSVTZDUJRzO4hM4/L7qD8d7bCogdjG0Gb2tGiyGd9fFs=) 2026-02-09 02:44:40.067159 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC70JHo/cwVUjbHHz7VRhaW5/brCOnA3lGVVdeXtJ5qI) 2026-02-09 02:44:44.834919 | orchestrator | 2026-02-09 02:44:44.835045 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-09 02:44:44.835057 | orchestrator | Monday 09 February 2026 02:44:40 +0000 (0:00:01.210) 0:00:23.823 ******* 2026-02-09 02:44:44.835065 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOXWH9D20ijMN1FBTXueOC3/40DNYIrgsGx5HGtK8Ycy) 2026-02-09 02:44:44.835076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDGrVn1Jev0wYuSL+wqEp+6RPK9WucEubunO1BLzATSPk2DW0e5WSsAGWD3d13vvP0ZYK/MSnN9dCbHfhL8hukwM3nCWPUNLa5kiGDGS9kiOoA5FzO1BOTyUk/E2tZ2ZqEKmJpPMFbMEcWyBooXC9YC/8sVRtRt9KLwfXGYCSoufqVTl8+elCJamzQk+0GK14lH3rtj/AzVcKXdaYmIbb3mAXHIz4fVJtiFhUlI5oYDuwmrzxdZlYzZKm8yah6jQosBCrfWgdevUEYxno+d3INVC27Z9PuaksEEO2aeApJtSYy9BGzNqTavqiqLogx7CnzpYPAwGizKjkI4buQfPNHHFxQEz7sodrPXAiCqbKUoupt6wDsK/kjq+Ys+AibwCbYSJIDrVeT4T+gXfP437FbxgDj/gK6h5FXkMDgISiw3s4Za/jutRERkOwlPN0OPrMzycVGGSQSDaKtHUaEw6yE8y6z0ezH6vXQF4z9ULixK5kUIyLDyO1CNEej+Q+MPy1M=) 2026-02-09 02:44:44.835088 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGJkOe2oJtb3P19ynVyzkrg5IN2XlzEF1QqJPZ8rtub/GifhVCjZoR3g0IoEVEqtHSPcuvkaZwPV0mOTF7VtuBI=) 2026-02-09 02:44:44.835097 | orchestrator | 2026-02-09 02:44:44.835104 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-09 02:44:44.835112 | orchestrator | Monday 09 February 2026 02:44:41 +0000 (0:00:01.159) 
0:00:24.982 ******* 2026-02-09 02:44:44.835119 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzlJEHaEB6DvRmEe6MkatLF7NsmDoFhLzd1/ce2/HC3v9k23qgn97ua9AZwyOeDSDPUMK/ic41PNHbsBQHxK6MXGdsa4AR8so/xjPxbYRwMH6XqI7DCbQD1XkB7dkTpmLG0MF1771AKu6PFEEeuRlo44sbJUxRNrbrdJ1htGUjxcDMUKGxUO3qfXXmwSlqZc2H/CDmcOrZ9RgKX6fUwMLB81Dt0RqxSkVwq0kby6kCArLEUQiV5thdWyuFOFQmAYZlKUD8UJmBY46r9NBkwXa6LHBEmMh2OJKhQpQN4/bsLKzMNybDt93x2iDsb1vgKOpUm3bQfrT49iyHvFb7ah3Qbjh76jCaecU+h2Pq37Y3ct7L1M8pV2Wjwwo3QgrnBBL3jtkSNC4ZTAhfLcacfzuITjqm4VIekIVlIlLXFbhw+ZDxUF21Gdl6B64gKpKObqqILpvhuSOc1FQeCAgFRfb5+uyVL8gqohfzguGQMaFOkkyrTQT5TiDgKg07ADbXc78=) 2026-02-09 02:44:44.835127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBImXufXZLy473pSD1xhyLwVRPicZBZai/voJ87UfT0UbZMuP037MOvmji0Qshwq7i2eH+yIN1YuSnnXpZewxRnk=) 2026-02-09 02:44:44.835134 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIk4y2hEDFCHARSB0hkN9DGX0aY3sBZFjyg6uVukNSEu) 2026-02-09 02:44:44.835142 | orchestrator | 2026-02-09 02:44:44.835148 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-09 02:44:44.835156 | orchestrator | Monday 09 February 2026 02:44:42 +0000 (0:00:01.151) 0:00:26.134 ******* 2026-02-09 02:44:44.835183 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCW/2mDnwi9bqilgj9Ne/LRXEXPODti8TTuq3QCCFmpM0RnLfhkW4qDwwJBkz8oVSksMKfyidxL+2D4oxcAlqlW6jeErGQQ5ZCmjYGjmtlKJ/n0LTaoi8ytUYeOYrRu9IMEP9FKB1kw5C68Tb/B6xAjDISXrVUayWVAEWzDkFjqCjRwsjJ/j314l5RlvzsQqHwW34YJEqj5uLG9gPx5175EdR44N7c9gg5EpO+Ipil+XJxyAAFofQGkM+M4sRTRu09E5bmqp7oi5JWOR1uu2Ku/oosh6OF6rDvkVnOONQZwxmnAsFBM0cd/mUv/kCjwbjANiLoVL/FA96L07vpMJv55MU0aZK3/cg8RAoYLVFrX7byAUyw/ks887+NNwFcXAD1eR+8wkCkPd46A7lqDVkLW1eOlcptWCDq/JRsdxBibhXcB5PG/xKbMS7u4B/k/Q4XHkPHoARDlX3gmU6T5++bBTtsI10XZWnGIsvLunk8+lSKT6tUeK09jXn7jjSK9Z78=) 2026-02-09 02:44:44.835192 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSUAC0eMgcHulsT8CRUPfyVd+8Cf8DNWkstNY5kQAFSqTCKejvH4WPJTUij/sv/AGvCBltdA0+G/l7OYEynV2w=) 2026-02-09 02:44:44.835200 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHPfUHQCicOydVdgQh7gZbJFhpbh5b0Xq1cF57SUe8J/) 2026-02-09 02:44:44.835207 | orchestrator | 2026-02-09 02:44:44.835215 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-09 02:44:44.835242 | orchestrator | Monday 09 February 2026 02:44:43 +0000 (0:00:01.139) 0:00:27.273 ******* 2026-02-09 02:44:44.835248 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-09 02:44:44.835253 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-09 02:44:44.835258 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-09 02:44:44.835263 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-09 02:44:44.835267 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-09 02:44:44.835272 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-09 02:44:44.835276 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-09 02:44:44.835281 | orchestrator | 
skipping: [testbed-manager] 2026-02-09 02:44:44.835286 | orchestrator | 2026-02-09 02:44:44.835304 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-09 02:44:44.835309 | orchestrator | Monday 09 February 2026 02:44:43 +0000 (0:00:00.179) 0:00:27.453 ******* 2026-02-09 02:44:44.835314 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:44:44.835319 | orchestrator | 2026-02-09 02:44:44.835323 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-09 02:44:44.835328 | orchestrator | Monday 09 February 2026 02:44:43 +0000 (0:00:00.057) 0:00:27.510 ******* 2026-02-09 02:44:44.835336 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:44:44.835340 | orchestrator | 2026-02-09 02:44:44.835345 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-09 02:44:44.835350 | orchestrator | Monday 09 February 2026 02:44:43 +0000 (0:00:00.060) 0:00:27.571 ******* 2026-02-09 02:44:44.835354 | orchestrator | changed: [testbed-manager] 2026-02-09 02:44:44.835359 | orchestrator | 2026-02-09 02:44:44.835363 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 02:44:44.835368 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-09 02:44:44.835374 | orchestrator | 2026-02-09 02:44:44.835378 | orchestrator | 2026-02-09 02:44:44.835383 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 02:44:44.835388 | orchestrator | Monday 09 February 2026 02:44:44 +0000 (0:00:00.789) 0:00:28.361 ******* 2026-02-09 02:44:44.835392 | orchestrator | =============================================================================== 2026-02-09 02:44:44.835397 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.11s 2026-02-09 
02:44:44.835401 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.30s 2026-02-09 02:44:44.835407 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-02-09 02:44:44.835411 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-02-09 02:44:44.835416 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-09 02:44:44.835420 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-09 02:44:44.835425 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-09 02:44:44.835429 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-09 02:44:44.835434 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-09 02:44:44.835438 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-09 02:44:44.835443 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-09 02:44:44.835448 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-02-09 02:44:44.835453 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-02-09 02:44:44.835458 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-02-09 02:44:44.835469 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-02-09 02:44:44.835475 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2026-02-09 02:44:44.835480 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.79s 2026-02-09 
02:44:44.835486 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-02-09 02:44:44.835492 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-02-09 02:44:44.835498 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-02-09 02:44:45.209115 | orchestrator | + osism apply squid 2026-02-09 02:44:57.377250 | orchestrator | 2026-02-09 02:44:57 | INFO  | Task 06ef258f-9599-41ed-bae5-9e9bf9921d72 (squid) was prepared for execution. 2026-02-09 02:44:57.377381 | orchestrator | 2026-02-09 02:44:57 | INFO  | It takes a moment until task 06ef258f-9599-41ed-bae5-9e9bf9921d72 (squid) has been started and output is visible here. 2026-02-09 02:46:53.301081 | orchestrator | 2026-02-09 02:46:53.301224 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-09 02:46:53.301233 | orchestrator | 2026-02-09 02:46:53.301239 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-09 02:46:53.301244 | orchestrator | Monday 09 February 2026 02:45:01 +0000 (0:00:00.215) 0:00:00.215 ******* 2026-02-09 02:46:53.301250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-09 02:46:53.301256 | orchestrator | 2026-02-09 02:46:53.301261 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-09 02:46:53.301265 | orchestrator | Monday 09 February 2026 02:45:01 +0000 (0:00:00.105) 0:00:00.320 ******* 2026-02-09 02:46:53.301270 | orchestrator | ok: [testbed-manager] 2026-02-09 02:46:53.301276 | orchestrator | 2026-02-09 02:46:53.301281 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-09 
02:46:53.301285 | orchestrator | Monday 09 February 2026 02:45:03 +0000 (0:00:01.610) 0:00:01.931 ******* 2026-02-09 02:46:53.301291 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-09 02:46:53.301296 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-09 02:46:53.301301 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-09 02:46:53.301306 | orchestrator | 2026-02-09 02:46:53.301311 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-09 02:46:53.301315 | orchestrator | Monday 09 February 2026 02:45:04 +0000 (0:00:01.273) 0:00:03.205 ******* 2026-02-09 02:46:53.301320 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-09 02:46:53.301325 | orchestrator | 2026-02-09 02:46:53.301329 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-09 02:46:53.301334 | orchestrator | Monday 09 February 2026 02:45:06 +0000 (0:00:01.249) 0:00:04.454 ******* 2026-02-09 02:46:53.301339 | orchestrator | ok: [testbed-manager] 2026-02-09 02:46:53.301352 | orchestrator | 2026-02-09 02:46:53.301357 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-09 02:46:53.301362 | orchestrator | Monday 09 February 2026 02:45:06 +0000 (0:00:00.376) 0:00:04.831 ******* 2026-02-09 02:46:53.301368 | orchestrator | changed: [testbed-manager] 2026-02-09 02:46:53.301373 | orchestrator | 2026-02-09 02:46:53.301378 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-09 02:46:53.301383 | orchestrator | Monday 09 February 2026 02:45:07 +0000 (0:00:01.018) 0:00:05.849 ******* 2026-02-09 02:46:53.301387 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-09 02:46:53.301397 | orchestrator | ok: [testbed-manager]
2026-02-09 02:46:53.301405 | orchestrator |
2026-02-09 02:46:53.301413 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-02-09 02:46:53.301446 | orchestrator | Monday 09 February 2026 02:45:40 +0000 (0:00:32.579) 0:00:38.429 *******
2026-02-09 02:46:53.301454 | orchestrator | changed: [testbed-manager]
2026-02-09 02:46:53.301462 | orchestrator |
2026-02-09 02:46:53.301469 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-02-09 02:46:53.301477 | orchestrator | Monday 09 February 2026 02:45:52 +0000 (0:00:12.067) 0:00:50.497 *******
2026-02-09 02:46:53.301485 | orchestrator | Pausing for 60 seconds
2026-02-09 02:46:53.301493 | orchestrator | changed: [testbed-manager]
2026-02-09 02:46:53.301501 | orchestrator |
2026-02-09 02:46:53.301510 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-02-09 02:46:53.301517 | orchestrator | Monday 09 February 2026 02:46:52 +0000 (0:01:00.085) 0:01:50.582 *******
2026-02-09 02:46:53.301526 | orchestrator | ok: [testbed-manager]
2026-02-09 02:46:53.301534 | orchestrator |
2026-02-09 02:46:53.301542 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-02-09 02:46:53.301550 | orchestrator | Monday 09 February 2026 02:46:52 +0000 (0:00:00.074) 0:01:50.656 *******
2026-02-09 02:46:53.301558 | orchestrator | changed: [testbed-manager]
2026-02-09 02:46:53.301566 | orchestrator |
2026-02-09 02:46:53.301573 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:46:53.301578 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:46:53.301582 | orchestrator |
2026-02-09 02:46:53.301587 | orchestrator |
2026-02-09 02:46:53.301592 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:46:53.301597 | orchestrator | Monday 09 February 2026 02:46:52 +0000 (0:00:00.697) 0:01:51.354 *******
2026-02-09 02:46:53.301601 | orchestrator | ===============================================================================
2026-02-09 02:46:53.301634 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s
2026-02-09 02:46:53.301640 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.58s
2026-02-09 02:46:53.301645 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.07s
2026-02-09 02:46:53.301651 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.61s
2026-02-09 02:46:53.301657 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.27s
2026-02-09 02:46:53.301662 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.25s
2026-02-09 02:46:53.301667 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.02s
2026-02-09 02:46:53.301672 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.70s
2026-02-09 02:46:53.301678 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s
2026-02-09 02:46:53.301683 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.11s
2026-02-09 02:46:53.301689 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-02-09 02:46:53.655630 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-09 02:46:53.656172 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-09 02:46:53.698552 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-09 02:46:53.698646 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-09 02:46:53.703482 | orchestrator | + set -e
2026-02-09 02:46:53.703548 | orchestrator | + NAMESPACE=kolla/release
2026-02-09 02:46:53.703562 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-09 02:46:53.710531 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-09 02:46:53.782210 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-09 02:46:53.783103 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-09 02:47:05.913557 | orchestrator | 2026-02-09 02:47:05 | INFO  | Task af60c0bc-f3e8-4595-b673-e6bb70c02505 (operator) was prepared for execution.
2026-02-09 02:47:05.913689 | orchestrator | 2026-02-09 02:47:05 | INFO  | It takes a moment until task af60c0bc-f3e8-4595-b673-e6bb70c02505 (operator) has been started and output is visible here.
2026-02-09 02:47:21.894484 | orchestrator |
2026-02-09 02:47:21.894612 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-09 02:47:21.894633 | orchestrator |
2026-02-09 02:47:21.894647 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-09 02:47:21.894660 | orchestrator | Monday 09 February 2026 02:47:10 +0000 (0:00:00.148) 0:00:00.148 *******
2026-02-09 02:47:21.894673 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:47:21.894688 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:47:21.894701 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:47:21.894714 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:47:21.894728 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:47:21.894740 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:47:21.894753 | orchestrator |
2026-02-09 02:47:21.894766 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-09 02:47:21.894780 | orchestrator | Monday 09 February 2026 02:47:13 +0000 (0:00:03.248) 0:00:03.396 *******
2026-02-09 02:47:21.894793 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:47:21.894807 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:47:21.894819 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:47:21.894854 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:47:21.894869 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:47:21.894878 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:47:21.894886 | orchestrator |
2026-02-09 02:47:21.894896 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-02-09 02:47:21.894911 | orchestrator |
2026-02-09 02:47:21.894924 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-09 02:47:21.894937 | orchestrator | Monday 09 February 2026 02:47:14 +0000 (0:00:00.756) 0:00:04.153 *******
2026-02-09 02:47:21.894950 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:47:21.894963 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:47:21.894974 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:47:21.894988 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:47:21.895001 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:47:21.895016 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:47:21.895031 | orchestrator |
2026-02-09 02:47:21.895044 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-09 02:47:21.895054 | orchestrator | Monday 09 February 2026 02:47:14 +0000 (0:00:00.168) 0:00:04.322 *******
2026-02-09 02:47:21.895063 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:47:21.895073 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:47:21.895110 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:47:21.895119 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:47:21.895128 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:47:21.895137 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:47:21.895146 | orchestrator |
2026-02-09 02:47:21.895156 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-02-09 02:47:21.895166 | orchestrator | Monday 09 February 2026 02:47:14 +0000 (0:00:00.175) 0:00:04.497 *******
2026-02-09 02:47:21.895175 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:47:21.895186 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:47:21.895195 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:47:21.895204 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:47:21.895213 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:47:21.895223 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:47:21.895232 | orchestrator |
2026-02-09 02:47:21.895241 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-02-09 02:47:21.895250 | orchestrator | Monday 09 February 2026 02:47:15 +0000 (0:00:00.607) 0:00:05.105 *******
2026-02-09 02:47:21.895260 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:47:21.895270 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:47:21.895279 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:47:21.895288 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:47:21.895298 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:47:21.895308 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:47:21.895340 | orchestrator |
2026-02-09 02:47:21.895349 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-02-09 02:47:21.895357 | orchestrator | Monday 09 February 2026 02:47:15 +0000 (0:00:00.769) 0:00:05.874 *******
2026-02-09 02:47:21.895365 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-02-09 02:47:21.895373 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-02-09 02:47:21.895381 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-02-09 02:47:21.895389 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-02-09 02:47:21.895397 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-02-09 02:47:21.895405 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-02-09 02:47:21.895412 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-02-09 02:47:21.895420 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-02-09 02:47:21.895428 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-02-09 02:47:21.895436 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-02-09 02:47:21.895444 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-02-09 02:47:21.895452 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-02-09 02:47:21.895459 | orchestrator |
2026-02-09 02:47:21.895467 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-02-09 02:47:21.895475 | orchestrator | Monday 09 February 2026 02:47:17 +0000 (0:00:01.140) 0:00:07.014 *******
2026-02-09 02:47:21.895483 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:47:21.895491 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:47:21.895499 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:47:21.895507 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:47:21.895518 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:47:21.895532 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:47:21.895547 | orchestrator |
2026-02-09 02:47:21.895561 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-02-09 02:47:21.895577 | orchestrator | Monday 09 February 2026 02:47:18 +0000 (0:00:01.192) 0:00:08.207 *******
2026-02-09 02:47:21.895592 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-02-09 02:47:21.895603 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-02-09 02:47:21.895611 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-02-09 02:47:21.895623 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-02-09 02:47:21.895661 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-02-09 02:47:21.895676 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-02-09 02:47:21.895689 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-02-09 02:47:21.895702 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-02-09 02:47:21.895715 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-02-09 02:47:21.895728 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-02-09 02:47:21.895741 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-02-09 02:47:21.895754 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-02-09 02:47:21.895767 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-02-09 02:47:21.895775 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-02-09 02:47:21.895782 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-02-09 02:47:21.895791 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-02-09 02:47:21.895799 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-02-09 02:47:21.895806 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-02-09 02:47:21.895814 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-02-09 02:47:21.895822 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-02-09 02:47:21.895838 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-02-09 02:47:21.895846 | orchestrator |
2026-02-09 02:47:21.895854 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-02-09 02:47:21.895863 | orchestrator | Monday 09 February 2026 02:47:19 +0000 (0:00:01.199) 0:00:09.407 *******
2026-02-09 02:47:21.895871 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:47:21.895879 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:47:21.895886 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:47:21.895894 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:47:21.895902 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:47:21.895910 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:47:21.895918 | orchestrator |
2026-02-09 02:47:21.895926 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-02-09 02:47:21.895934 | orchestrator | Monday 09 February 2026 02:47:19 +0000 (0:00:00.175) 0:00:09.582 *******
2026-02-09 02:47:21.895941 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:47:21.895949 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:47:21.895957 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:47:21.895965 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:47:21.895972 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:47:21.895980 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:47:21.895988 | orchestrator |
2026-02-09 02:47:21.895996 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-02-09 02:47:21.896004 | orchestrator | Monday 09 February 2026 02:47:19 +0000 (0:00:00.283) 0:00:09.866 *******
2026-02-09 02:47:21.896012 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:47:21.896020 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:47:21.896027 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:47:21.896035 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:47:21.896043 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:47:21.896050 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:47:21.896058 | orchestrator |
2026-02-09 02:47:21.896066 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-02-09 02:47:21.896074 | orchestrator | Monday 09 February 2026 02:47:20 +0000 (0:00:00.598) 0:00:10.465 *******
2026-02-09 02:47:21.896102 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:47:21.896111 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:47:21.896118 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:47:21.896126 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:47:21.896143 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:47:21.896151 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:47:21.896159 | orchestrator |
2026-02-09 02:47:21.896167 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-02-09 02:47:21.896175 | orchestrator | Monday 09 February 2026 02:47:20 +0000 (0:00:00.234) 0:00:10.700 *******
2026-02-09 02:47:21.896183 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-09 02:47:21.896191 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:47:21.896199 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-09 02:47:21.896206 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:47:21.896214 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-09 02:47:21.896222 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:47:21.896230 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-09 02:47:21.896238 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-09 02:47:21.896245 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:47:21.896253 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:47:21.896261 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-09 02:47:21.896269 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:47:21.896277 | orchestrator |
2026-02-09 02:47:21.896285 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-02-09 02:47:21.896292 | orchestrator | Monday 09 February 2026 02:47:21 +0000 (0:00:00.713) 0:00:11.414 *******
2026-02-09 02:47:21.896307 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:47:21.896315 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:47:21.896323 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:47:21.896330 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:47:21.896338 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:47:21.896345 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:47:21.896353 | orchestrator |
2026-02-09 02:47:21.896361 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-02-09 02:47:21.896369 | orchestrator | Monday 09 February 2026 02:47:21 +0000 (0:00:00.188) 0:00:11.602 *******
2026-02-09 02:47:21.896377 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:47:21.896385 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:47:21.896394 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:47:21.896407 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:47:21.896431 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:47:23.250719 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:47:23.250792 | orchestrator |
2026-02-09 02:47:23.250800 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-02-09 02:47:23.250807 | orchestrator | Monday 09 February 2026 02:47:21 +0000 (0:00:00.166) 0:00:11.769 *******
2026-02-09 02:47:23.250812 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:47:23.250817 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:47:23.250822 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:47:23.250827 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:47:23.250832 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:47:23.250837 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:47:23.250841 | orchestrator |
2026-02-09 02:47:23.250846 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-02-09 02:47:23.250851 | orchestrator | Monday 09 February 2026 02:47:22 +0000 (0:00:00.171) 0:00:11.941 *******
2026-02-09 02:47:23.250856 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:47:23.250861 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:47:23.250880 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:47:23.250885 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:47:23.250890 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:47:23.250895 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:47:23.250900 | orchestrator |
2026-02-09 02:47:23.250904 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-02-09 02:47:23.250909 | orchestrator | Monday 09 February 2026 02:47:22 +0000 (0:00:00.623) 0:00:12.565 *******
2026-02-09 02:47:23.250914 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:47:23.250919 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:47:23.250924 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:47:23.250928 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:47:23.250933 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:47:23.250938 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:47:23.250942 | orchestrator |
2026-02-09 02:47:23.250947 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:47:23.250953 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-09 02:47:23.250960 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-09 02:47:23.250965 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-09 02:47:23.250970 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-09 02:47:23.250974 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-09 02:47:23.250996 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-09 02:47:23.251001 | orchestrator |
2026-02-09 02:47:23.251005 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:47:23.251010 | orchestrator | Monday 09 February 2026 02:47:22 +0000 (0:00:00.278) 0:00:12.843 *******
2026-02-09 02:47:23.251015 | orchestrator | ===============================================================================
2026-02-09 02:47:23.251020 | orchestrator | Gathering Facts --------------------------------------------------------- 3.25s
2026-02-09 02:47:23.251025 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.20s
2026-02-09 02:47:23.251030 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.19s
2026-02-09 02:47:23.251036 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.14s
2026-02-09 02:47:23.251041 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s
2026-02-09 02:47:23.251045 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2026-02-09 02:47:23.251050 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2026-02-09 02:47:23.251055 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s
2026-02-09 02:47:23.251065 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2026-02-09 02:47:23.251069 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s
2026-02-09 02:47:23.251139 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.28s
2026-02-09 02:47:23.251149 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.28s
2026-02-09 02:47:23.251157 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.23s
2026-02-09 02:47:23.251164 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.19s
2026-02-09 02:47:23.251172 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s
2026-02-09 02:47:23.251180 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s
2026-02-09 02:47:23.251188 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2026-02-09 02:47:23.251195 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s
2026-02-09 02:47:23.251203 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2026-02-09 02:47:23.631572 | orchestrator | + osism apply --environment custom facts
2026-02-09 02:47:25.591398 | orchestrator | 2026-02-09 02:47:25 | INFO  | Trying to run play facts in environment custom
2026-02-09 02:47:35.676213 | orchestrator | 2026-02-09 02:47:35 | INFO  | Task 180c76c0-44ce-454c-9e4a-bbbceca24802 (facts) was prepared for execution.
2026-02-09 02:47:35.676300 | orchestrator | 2026-02-09 02:47:35 | INFO  | It takes a moment until task 180c76c0-44ce-454c-9e4a-bbbceca24802 (facts) has been started and output is visible here.
2026-02-09 02:48:16.627068 | orchestrator |
2026-02-09 02:48:16.627194 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-09 02:48:16.627207 | orchestrator |
2026-02-09 02:48:16.627213 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-09 02:48:16.627220 | orchestrator | Monday 09 February 2026 02:47:39 +0000 (0:00:00.090) 0:00:00.090 *******
2026-02-09 02:48:16.627227 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:16.627235 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:16.627242 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:48:16.627249 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:16.627256 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:48:16.627263 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:16.627295 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:48:16.627301 | orchestrator |
2026-02-09 02:48:16.627305 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-09 02:48:16.627309 | orchestrator | Monday 09 February 2026 02:47:41 +0000 (0:00:01.362) 0:00:01.452 *******
2026-02-09 02:48:16.627313 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:16.627317 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:16.627321 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:16.627325 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:16.627329 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:48:16.627333 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:48:16.627339 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:48:16.627343 | orchestrator |
2026-02-09 02:48:16.627347 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-09 02:48:16.627351 | orchestrator |
2026-02-09 02:48:16.627355 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-09 02:48:16.627359 | orchestrator | Monday 09 February 2026 02:47:42 +0000 (0:00:01.189) 0:00:02.642 *******
2026-02-09 02:48:16.627363 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:16.627367 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:16.627371 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:16.627374 | orchestrator |
2026-02-09 02:48:16.627378 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-09 02:48:16.627383 | orchestrator | Monday 09 February 2026 02:47:42 +0000 (0:00:00.139) 0:00:02.781 *******
2026-02-09 02:48:16.627387 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:16.627391 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:16.627444 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:16.627450 | orchestrator |
2026-02-09 02:48:16.627454 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-09 02:48:16.627458 | orchestrator | Monday 09 February 2026 02:47:42 +0000 (0:00:00.212) 0:00:02.993 *******
2026-02-09 02:48:16.627462 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:16.627466 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:16.627470 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:16.627474 | orchestrator |
2026-02-09 02:48:16.627478 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-09 02:48:16.627483 | orchestrator | Monday 09 February 2026 02:47:43 +0000 (0:00:00.234) 0:00:03.228 *******
2026-02-09 02:48:16.627488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 02:48:16.627493 | orchestrator |
2026-02-09 02:48:16.627497 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-09 02:48:16.627501 | orchestrator | Monday 09 February 2026 02:47:43 +0000 (0:00:00.163) 0:00:03.391 *******
2026-02-09 02:48:16.627505 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:16.627509 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:16.627513 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:16.627517 | orchestrator |
2026-02-09 02:48:16.627521 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-09 02:48:16.627525 | orchestrator | Monday 09 February 2026 02:47:43 +0000 (0:00:00.473) 0:00:03.864 *******
2026-02-09 02:48:16.627529 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:48:16.627533 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:48:16.627537 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:48:16.627541 | orchestrator |
2026-02-09 02:48:16.627545 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-09 02:48:16.627549 | orchestrator | Monday 09 February 2026 02:47:43 +0000 (0:00:00.147) 0:00:04.012 *******
2026-02-09 02:48:16.627553 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:16.627556 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:16.627560 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:16.627564 | orchestrator |
2026-02-09 02:48:16.627568 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-09 02:48:16.627577 | orchestrator | Monday 09 February 2026 02:47:44 +0000 (0:00:01.007) 0:00:05.020 *******
2026-02-09 02:48:16.627581 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:16.627585 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:16.627589 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:16.627593 | orchestrator |
2026-02-09 02:48:16.627597 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-09 02:48:16.627633 | orchestrator | Monday 09 February 2026 02:47:45 +0000 (0:00:00.465) 0:00:05.485 *******
2026-02-09 02:48:16.627639 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:16.627643 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:16.627648 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:16.627653 | orchestrator |
2026-02-09 02:48:16.627658 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-09 02:48:16.627662 | orchestrator | Monday 09 February 2026 02:47:46 +0000 (0:00:01.017) 0:00:06.503 *******
2026-02-09 02:48:16.627667 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:16.627671 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:16.627676 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:16.627681 | orchestrator |
2026-02-09 02:48:16.627685 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-09 02:48:16.627690 | orchestrator | Monday 09 February 2026 02:48:00 +0000 (0:00:14.566) 0:00:21.070 *******
2026-02-09 02:48:16.627694 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:48:16.627699 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:48:16.627703 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:48:16.627708 | orchestrator |
2026-02-09 02:48:16.627712 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-09 02:48:16.627730 | orchestrator | Monday 09 February 2026 02:48:01 +0000 (0:00:00.114) 0:00:21.185 *******
2026-02-09 02:48:16.627735 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:16.627740 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:16.627744 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:16.627748 | orchestrator |
2026-02-09 02:48:16.627753 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-09 02:48:16.627761 | orchestrator | Monday 09 February 2026 02:48:08 +0000 (0:00:06.995) 0:00:28.180 *******
2026-02-09 02:48:16.627769 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:16.627776 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:16.627785 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:16.627794 | orchestrator |
2026-02-09 02:48:16.627801 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-09 02:48:16.627808 | orchestrator | Monday 09 February 2026 02:48:08 +0000 (0:00:00.455) 0:00:28.636 *******
2026-02-09 02:48:16.627815 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-09 02:48:16.627823 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-09 02:48:16.627829 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-09 02:48:16.627836 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-09 02:48:16.627842 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-09 02:48:16.627848 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-09 02:48:16.627855 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-09 02:48:16.627861 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-09 02:48:16.627868 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-09 02:48:16.627876 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-09 02:48:16.627883 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-09 02:48:16.627890 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-09 02:48:16.627897 | orchestrator |
2026-02-09 02:48:16.627903 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-09 02:48:16.627917 | orchestrator | Monday 09 February 2026 02:48:11 +0000 (0:00:03.345) 0:00:31.981 *******
2026-02-09 02:48:16.627923 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:16.627930 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:16.627937 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:16.627956 | orchestrator |
2026-02-09 02:48:16.627961 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-09 02:48:16.627965 | orchestrator |
2026-02-09 02:48:16.627969 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-09 02:48:16.627973 | orchestrator | Monday 09 February 2026 02:48:13 +0000 (0:00:01.254) 0:00:33.235 *******
2026-02-09 02:48:16.627983 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:16.628032 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:16.628039 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:16.628045 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:16.628051 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:16.628058 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:16.628062 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:16.628066 | orchestrator |
2026-02-09 02:48:16.628070 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:48:16.628075 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:48:16.628080 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:48:16.628087 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:48:16.628091 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:48:16.628095 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:48:16.628100 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:48:16.628104 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:48:16.628107 | orchestrator |
2026-02-09 02:48:16.628111 | orchestrator |
2026-02-09 02:48:16.628115 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:48:16.628119 | orchestrator | Monday 09 February 2026 02:48:16 +0000 (0:00:03.482) 0:00:36.717 *******
2026-02-09 02:48:16.628123 | orchestrator | ===============================================================================
2026-02-09 02:48:16.628127 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.57s
2026-02-09 02:48:16.628131 | orchestrator | Install required packages (Debian) -------------------------------------- 7.00s
2026-02-09 02:48:16.628135 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.48s
2026-02-09 02:48:16.628139 | orchestrator | Copy fact files --------------------------------------------------------- 3.35s
2026-02-09 02:48:16.628143 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s
2026-02-09 02:48:16.628147 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.25s
2026-02-09 02:48:16.628156 | orchestrator | Copy fact file ---------------------------------------------------------- 1.19s
2026-02-09 02:48:16.884136 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s
2026-02-09 02:48:16.884221 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.01s
2026-02-09 02:48:16.884249 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2026-02-09 02:48:16.884279 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-02-09 02:48:16.884286 | orchestrator | Create custom facts directory ------------------------------------------- 0.46s
2026-02-09 02:48:16.884294 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2026-02-09 02:48:16.884301 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2026-02-09 02:48:16.884308 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-02-09 02:48:16.884316 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-02-09 02:48:16.884324 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s
2026-02-09 02:48:16.884331 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2026-02-09 02:48:17.261058 | orchestrator | + osism apply bootstrap
2026-02-09 02:48:29.379172 | orchestrator | 2026-02-09 02:48:29 | INFO  | Task d536ed30-1422-4bb9-bf85-951f670c51cf (bootstrap) was prepared for execution.
2026-02-09 02:48:29.379287 | orchestrator | 2026-02-09 02:48:29 | INFO  | It takes a moment until task d536ed30-1422-4bb9-bf85-951f670c51cf (bootstrap) has been started and output is visible here.
2026-02-09 02:48:45.302547 | orchestrator |
2026-02-09 02:48:45.302681 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-09 02:48:45.302696 | orchestrator |
2026-02-09 02:48:45.302703 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-09 02:48:45.302710 | orchestrator | Monday 09 February 2026 02:48:33 +0000 (0:00:00.162) 0:00:00.162 *******
2026-02-09 02:48:45.302716 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:45.302724 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:45.302730 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:45.302737 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:45.302743 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:45.302749 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:45.302756 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:45.302762 | orchestrator |
2026-02-09 02:48:45.302769 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-09 02:48:45.302775 | orchestrator |
2026-02-09 02:48:45.302782 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-09 02:48:45.302788 | orchestrator | Monday 09 February 2026 02:48:34 +0000 (0:00:00.245) 0:00:00.408 *******
2026-02-09 02:48:45.302795 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:45.302801 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:45.302807 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:45.302813 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:45.302819 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:45.302825 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:45.302832 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:45.302838 | orchestrator |
2026-02-09 02:48:45.302844 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-09 02:48:45.302850 | orchestrator |
2026-02-09 02:48:45.302857 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-09 02:48:45.302863 | orchestrator | Monday 09 February 2026 02:48:37 +0000 (0:00:03.470) 0:00:03.878 *******
2026-02-09 02:48:45.302871 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-09 02:48:45.302883 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-09 02:48:45.302894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-09 02:48:45.302905 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-09 02:48:45.302915 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-09 02:48:45.302926 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-09 02:48:45.302934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 02:48:45.303012 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-09 02:48:45.303022 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-09 02:48:45.303047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-09 02:48:45.303054 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-09 02:48:45.303060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 02:48:45.303066 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-09 02:48:45.303072 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-09 02:48:45.303078 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-09 02:48:45.303085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-09 02:48:45.303092 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-09 02:48:45.303098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-09 02:48:45.303106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-09 02:48:45.303113 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:48:45.303123 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-09 02:48:45.303134 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-09 02:48:45.303145 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-09 02:48:45.303158 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-09 02:48:45.303173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-09 02:48:45.303185 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:48:45.303196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-09 02:48:45.303206 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-09 02:48:45.303216 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-09 02:48:45.303228 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-09 02:48:45.303240 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-09 02:48:45.303252 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-09 02:48:45.303263 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 02:48:45.303275 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-09 02:48:45.303283 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-09 02:48:45.303291 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 02:48:45.303298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-09 02:48:45.303305 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-09 02:48:45.303312 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:48:45.303320 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-09 02:48:45.303327 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 02:48:45.303334 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:48:45.303341 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-09 02:48:45.303348 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-09 02:48:45.303355 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-09 02:48:45.303362 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-09 02:48:45.303372 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:48:45.303401 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-09 02:48:45.303411 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-09 02:48:45.303420 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-09 02:48:45.303449 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-09 02:48:45.303460 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-09 02:48:45.303470 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:48:45.303479 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-09 02:48:45.303499 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-09 02:48:45.303508 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:48:45.303517 | orchestrator |
2026-02-09 02:48:45.303527 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-09 02:48:45.303537 | orchestrator |
2026-02-09 02:48:45.303547 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-09 02:48:45.303558 | orchestrator | Monday 09 February 2026 02:48:38 +0000 (0:00:00.487) 0:00:04.366 *******
2026-02-09 02:48:45.303567 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:45.303577 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:45.303587 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:45.303597 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:45.303607 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:45.303617 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:45.303627 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:45.303638 | orchestrator |
2026-02-09 02:48:45.303648 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-09 02:48:45.303658 | orchestrator | Monday 09 February 2026 02:48:39 +0000 (0:00:01.206) 0:00:05.573 *******
2026-02-09 02:48:45.303669 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:45.303678 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:45.303689 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:45.303699 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:45.303710 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:45.303721 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:45.303731 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:45.303742 | orchestrator |
2026-02-09 02:48:45.303749 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-09 02:48:45.303755 | orchestrator | Monday 09 February 2026 02:48:40 +0000 (0:00:00.267) 0:00:06.776 *******
2026-02-09 02:48:45.303763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:48:45.303771 | orchestrator |
2026-02-09 02:48:45.303778 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-09 02:48:45.303784 | orchestrator | Monday 09 February 2026 02:48:40 +0000 (0:00:00.267) 0:00:07.043 *******
2026-02-09 02:48:45.303790 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:45.303796 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:48:45.303802 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:45.303808 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:45.303814 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:48:45.303821 | orchestrator | changed: [testbed-manager]
2026-02-09 02:48:45.303827 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:48:45.303833 | orchestrator |
2026-02-09 02:48:45.303839 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-09 02:48:45.303845 | orchestrator | Monday 09 February 2026 02:48:42 +0000 (0:00:02.101) 0:00:09.144 *******
2026-02-09 02:48:45.303851 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:48:45.303859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:48:45.303867 | orchestrator |
2026-02-09 02:48:45.303873 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-09 02:48:45.303880 | orchestrator | Monday 09 February 2026 02:48:43 +0000 (0:00:00.290) 0:00:09.435 *******
2026-02-09 02:48:45.303886 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:45.303892 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:45.303898 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:45.303905 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:48:45.303910 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:48:45.303917 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:48:45.303929 | orchestrator |
2026-02-09 02:48:45.303962 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-09 02:48:45.303970 | orchestrator | Monday 09 February 2026 02:48:44 +0000 (0:00:01.013) 0:00:10.448 *******
2026-02-09 02:48:45.303977 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:48:45.303983 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:45.303989 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:45.303995 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:45.304001 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:48:45.304010 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:48:45.304020 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:48:45.304029 | orchestrator |
2026-02-09 02:48:45.304039 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-09 02:48:45.304049 | orchestrator | Monday 09 February 2026 02:48:44 +0000 (0:00:00.568) 0:00:11.016 *******
2026-02-09 02:48:45.304060 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:48:45.304070 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:48:45.304080 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:48:45.304091 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:48:45.304097 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:48:45.304103 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:48:45.304110 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:45.304116 | orchestrator |
2026-02-09 02:48:45.304122 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-09 02:48:45.304129 | orchestrator | Monday 09 February 2026 02:48:45 +0000 (0:00:00.456) 0:00:11.473 *******
2026-02-09 02:48:45.304136 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:48:45.304142 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:48:45.304156 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:48:57.496267 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:48:57.496378 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:48:57.496392 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:48:57.496403 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:48:57.496413 | orchestrator |
2026-02-09 02:48:57.496424 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-09 02:48:57.496435 | orchestrator | Monday 09 February 2026 02:48:45 +0000 (0:00:00.215) 0:00:11.688 *******
2026-02-09 02:48:57.496447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:48:57.496473 | orchestrator |
2026-02-09 02:48:57.496484 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-09 02:48:57.496494 | orchestrator | Monday 09 February 2026 02:48:45 +0000 (0:00:00.301) 0:00:11.989 *******
2026-02-09 02:48:57.496505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:48:57.496515 | orchestrator |
2026-02-09 02:48:57.496525 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-09 02:48:57.496535 | orchestrator | Monday 09 February 2026 02:48:45 +0000 (0:00:00.303) 0:00:12.292 *******
2026-02-09 02:48:57.496545 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.496556 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.496566 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.496575 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:57.496585 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:57.496597 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:57.496613 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.496640 | orchestrator |
2026-02-09 02:48:57.496657 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-09 02:48:57.496672 | orchestrator | Monday 09 February 2026 02:48:47 +0000 (0:00:01.485) 0:00:13.778 *******
2026-02-09 02:48:57.496720 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:48:57.496736 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:48:57.496752 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:48:57.496769 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:48:57.496785 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:48:57.496801 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:48:57.496817 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:48:57.496835 | orchestrator |
2026-02-09 02:48:57.496853 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-09 02:48:57.496870 | orchestrator | Monday 09 February 2026 02:48:47 +0000 (0:00:00.348) 0:00:14.126 *******
2026-02-09 02:48:57.496884 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.496895 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.496906 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.496917 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.496966 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:57.496988 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:57.497011 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:57.497025 | orchestrator |
2026-02-09 02:48:57.497042 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-09 02:48:57.497057 | orchestrator | Monday 09 February 2026 02:48:48 +0000 (0:00:00.610) 0:00:14.736 *******
2026-02-09 02:48:57.497073 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:48:57.497090 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:48:57.497104 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:48:57.497121 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:48:57.497137 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:48:57.497153 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:48:57.497170 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:48:57.497187 | orchestrator |
2026-02-09 02:48:57.497205 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-09 02:48:57.497222 | orchestrator | Monday 09 February 2026 02:48:48 +0000 (0:00:00.264) 0:00:15.001 *******
2026-02-09 02:48:57.497238 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:57.497249 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.497258 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:57.497268 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:57.497277 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:48:57.497287 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:48:57.497308 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:48:57.497318 | orchestrator |
2026-02-09 02:48:57.497328 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-09 02:48:57.497337 | orchestrator | Monday 09 February 2026 02:48:49 +0000 (0:00:00.548) 0:00:15.549 *******
2026-02-09 02:48:57.497347 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.497357 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:57.497366 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:57.497376 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:57.497385 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:48:57.497395 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:48:57.497404 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:48:57.497414 | orchestrator |
2026-02-09 02:48:57.497424 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-09 02:48:57.497433 | orchestrator | Monday 09 February 2026 02:48:50 +0000 (0:00:01.156) 0:00:16.705 *******
2026-02-09 02:48:57.497496 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:57.497519 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.497534 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.497549 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.497563 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:57.497578 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:57.497592 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.497625 | orchestrator |
2026-02-09 02:48:57.497657 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-09 02:48:57.497691 | orchestrator | Monday 09 February 2026 02:48:51 +0000 (0:00:01.008) 0:00:17.714 *******
2026-02-09 02:48:57.497734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:48:57.497750 | orchestrator |
2026-02-09 02:48:57.497761 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-09 02:48:57.497771 | orchestrator | Monday 09 February 2026 02:48:51 +0000 (0:00:00.329) 0:00:18.044 *******
2026-02-09 02:48:57.497780 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:48:57.497790 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:48:57.497799 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:48:57.497809 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:48:57.497818 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:48:57.497828 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:48:57.497837 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:48:57.497847 | orchestrator |
2026-02-09 02:48:57.497857 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-09 02:48:57.497866 | orchestrator | Monday 09 February 2026 02:48:52 +0000 (0:00:01.240) 0:00:19.284 *******
2026-02-09 02:48:57.497876 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.497885 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.497895 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.497909 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.498114 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:57.498242 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:57.498265 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:57.498282 | orchestrator |
2026-02-09 02:48:57.498297 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-09 02:48:57.498311 | orchestrator | Monday 09 February 2026 02:48:53 +0000 (0:00:00.240) 0:00:19.524 *******
2026-02-09 02:48:57.498324 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.498338 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.498350 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.498364 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.498376 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:57.498390 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:57.498403 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:57.498416 | orchestrator |
2026-02-09 02:48:57.498427 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-09 02:48:57.498435 | orchestrator | Monday 09 February 2026 02:48:53 +0000 (0:00:00.234) 0:00:19.759 *******
2026-02-09 02:48:57.498443 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.498454 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.498467 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.498478 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.498495 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:57.498510 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:57.498523 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:57.498535 | orchestrator |
2026-02-09 02:48:57.498548 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-09 02:48:57.498561 | orchestrator | Monday 09 February 2026 02:48:53 +0000 (0:00:00.220) 0:00:19.980 *******
2026-02-09 02:48:57.498575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:48:57.498591 | orchestrator |
2026-02-09 02:48:57.498604 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-09 02:48:57.498617 | orchestrator | Monday 09 February 2026 02:48:53 +0000 (0:00:00.279) 0:00:20.259 *******
2026-02-09 02:48:57.498631 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.498645 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.498674 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.498683 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.498690 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:57.498698 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:57.498706 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:57.498714 | orchestrator |
2026-02-09 02:48:57.498722 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-09 02:48:57.498730 | orchestrator | Monday 09 February 2026 02:48:54 +0000 (0:00:00.539) 0:00:20.798 *******
2026-02-09 02:48:57.498738 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:48:57.498746 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:48:57.498754 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:48:57.498761 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:48:57.498769 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:48:57.498777 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:48:57.498785 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:48:57.498793 | orchestrator |
2026-02-09 02:48:57.498801 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-09 02:48:57.498809 | orchestrator | Monday 09 February 2026 02:48:54 +0000 (0:00:00.312) 0:00:21.111 *******
2026-02-09 02:48:57.498817 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.498825 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.498833 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.498841 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.498849 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:48:57.498857 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:48:57.498865 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:48:57.498873 | orchestrator |
2026-02-09 02:48:57.498881 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-09 02:48:57.498889 | orchestrator | Monday 09 February 2026 02:48:55 +0000 (0:00:01.051) 0:00:22.162 *******
2026-02-09 02:48:57.498897 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.498904 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.498912 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.498920 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.498955 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:48:57.498964 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:48:57.498982 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:48:57.498990 | orchestrator |
2026-02-09 02:48:57.498999 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-09 02:48:57.499007 | orchestrator | Monday 09 February 2026 02:48:56 +0000 (0:00:00.532) 0:00:22.694 *******
2026-02-09 02:48:57.499014 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:48:57.499022 | orchestrator | ok: [testbed-manager]
2026-02-09 02:48:57.499030 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:48:57.499038 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:48:57.499059 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:49:38.460327 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:49:38.460440 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:49:38.460458 | orchestrator |
2026-02-09 02:49:38.460471 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-09 02:49:38.460484 | orchestrator | Monday 09 February 2026 02:48:57 +0000 (0:00:01.095) 0:00:23.790 *******
2026-02-09 02:49:38.460495 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:49:38.460506 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:49:38.460516 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:49:38.460527 | orchestrator | changed: [testbed-manager]
2026-02-09 02:49:38.460539 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:49:38.460549 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:49:38.460561 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:49:38.460572 | orchestrator |
2026-02-09 02:49:38.460583 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-09 02:49:38.460594 | orchestrator | Monday 09 February 2026 02:49:12 +0000 (0:00:15.449) 0:00:39.239 *******
2026-02-09 02:49:38.460604 | orchestrator | ok: [testbed-manager]
2026-02-09 02:49:38.460636 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:49:38.460646 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:49:38.460656 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:49:38.460666 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:49:38.460677 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:49:38.460687 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:49:38.460697 | orchestrator |
2026-02-09 02:49:38.460706 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-09 02:49:38.460713 | orchestrator | Monday 09 February 2026 02:49:13 +0000 (0:00:00.276) 0:00:39.516 *******
2026-02-09 02:49:38.460719 | orchestrator | ok: [testbed-manager]
2026-02-09 02:49:38.460725 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:49:38.460731 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:49:38.460738 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:49:38.460744 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:49:38.460750 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:49:38.460756 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:49:38.460762 | orchestrator |
2026-02-09 02:49:38.460768 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-09 02:49:38.460775 | orchestrator | Monday 09 February 2026 02:49:13 +0000 (0:00:00.261) 0:00:39.777 *******
2026-02-09 02:49:38.460781 | orchestrator | ok: [testbed-manager]
2026-02-09 02:49:38.460787 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:49:38.460793 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:49:38.460799 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:49:38.460805 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:49:38.460812 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:49:38.460818 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:49:38.460825 | orchestrator |
2026-02-09 02:49:38.460831 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-09 02:49:38.460837 | orchestrator | Monday 09 February 2026 02:49:13 +0000 (0:00:00.249) 0:00:40.026 ******* 2026-02-09
02:49:38.460845 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:49:38.460854 | orchestrator | 2026-02-09 02:49:38.460894 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-09 02:49:38.460902 | orchestrator | Monday 09 February 2026 02:49:14 +0000 (0:00:00.327) 0:00:40.354 ******* 2026-02-09 02:49:38.460910 | orchestrator | ok: [testbed-manager] 2026-02-09 02:49:38.460917 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:49:38.460924 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:49:38.460931 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:49:38.460939 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:49:38.460946 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:49:38.460957 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:49:38.460968 | orchestrator | 2026-02-09 02:49:38.460978 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-09 02:49:38.460989 | orchestrator | Monday 09 February 2026 02:49:15 +0000 (0:00:01.707) 0:00:42.061 ******* 2026-02-09 02:49:38.460998 | orchestrator | changed: [testbed-manager] 2026-02-09 02:49:38.461010 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:49:38.461020 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:49:38.461032 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:49:38.461044 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:49:38.461054 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:49:38.461064 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:49:38.461072 | orchestrator | 2026-02-09 02:49:38.461079 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-09 02:49:38.461099 | 
orchestrator | Monday 09 February 2026 02:49:16 +0000 (0:00:01.075) 0:00:43.137 ******* 2026-02-09 02:49:38.461107 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:49:38.461115 | orchestrator | ok: [testbed-manager] 2026-02-09 02:49:38.461122 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:49:38.461136 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:49:38.461143 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:49:38.461150 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:49:38.461157 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:49:38.461164 | orchestrator | 2026-02-09 02:49:38.461171 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-09 02:49:38.461179 | orchestrator | Monday 09 February 2026 02:49:17 +0000 (0:00:00.797) 0:00:43.934 ******* 2026-02-09 02:49:38.461187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:49:38.461196 | orchestrator | 2026-02-09 02:49:38.461204 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-09 02:49:38.461212 | orchestrator | Monday 09 February 2026 02:49:17 +0000 (0:00:00.355) 0:00:44.290 ******* 2026-02-09 02:49:38.461219 | orchestrator | changed: [testbed-manager] 2026-02-09 02:49:38.461227 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:49:38.461234 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:49:38.461241 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:49:38.461248 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:49:38.461254 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:49:38.461260 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:49:38.461266 | orchestrator | 2026-02-09 02:49:38.461288 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-09 02:49:38.461295 | orchestrator | Monday 09 February 2026 02:49:19 +0000 (0:00:01.031) 0:00:45.321 ******* 2026-02-09 02:49:38.461301 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:49:38.461307 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:49:38.461313 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:49:38.461320 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:49:38.461326 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:49:38.461332 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:49:38.461338 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:49:38.461344 | orchestrator | 2026-02-09 02:49:38.461350 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-09 02:49:38.461357 | orchestrator | Monday 09 February 2026 02:49:19 +0000 (0:00:00.253) 0:00:45.575 ******* 2026-02-09 02:49:38.461367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:49:38.461377 | orchestrator | 2026-02-09 02:49:38.461387 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-09 02:49:38.461397 | orchestrator | Monday 09 February 2026 02:49:19 +0000 (0:00:00.312) 0:00:45.888 ******* 2026-02-09 02:49:38.461407 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:49:38.461417 | orchestrator | ok: [testbed-manager] 2026-02-09 02:49:38.461428 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:49:38.461438 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:49:38.461449 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:49:38.461458 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:49:38.461468 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:49:38.461479 | 
orchestrator | 2026-02-09 02:49:38.461489 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-09 02:49:38.461501 | orchestrator | Monday 09 February 2026 02:49:21 +0000 (0:00:01.667) 0:00:47.555 ******* 2026-02-09 02:49:38.461507 | orchestrator | changed: [testbed-manager] 2026-02-09 02:49:38.461513 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:49:38.461520 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:49:38.461526 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:49:38.461532 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:49:38.461538 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:49:38.461544 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:49:38.461556 | orchestrator | 2026-02-09 02:49:38.461563 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-09 02:49:38.461569 | orchestrator | Monday 09 February 2026 02:49:22 +0000 (0:00:01.164) 0:00:48.720 ******* 2026-02-09 02:49:38.461575 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:49:38.461581 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:49:38.461597 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:49:38.461604 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:49:38.461610 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:49:38.461616 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:49:38.461622 | orchestrator | changed: [testbed-manager] 2026-02-09 02:49:38.461629 | orchestrator | 2026-02-09 02:49:38.461635 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-09 02:49:38.461641 | orchestrator | Monday 09 February 2026 02:49:35 +0000 (0:00:13.201) 0:01:01.921 ******* 2026-02-09 02:49:38.461647 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:49:38.461654 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:49:38.461660 | orchestrator | ok: 
[testbed-node-4] 2026-02-09 02:49:38.461666 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:49:38.461672 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:49:38.461678 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:49:38.461684 | orchestrator | ok: [testbed-manager] 2026-02-09 02:49:38.461691 | orchestrator | 2026-02-09 02:49:38.461697 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-09 02:49:38.461707 | orchestrator | Monday 09 February 2026 02:49:36 +0000 (0:00:01.050) 0:01:02.972 ******* 2026-02-09 02:49:38.461717 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:49:38.461727 | orchestrator | ok: [testbed-manager] 2026-02-09 02:49:38.461737 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:49:38.461746 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:49:38.461756 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:49:38.461764 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:49:38.461774 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:49:38.461784 | orchestrator | 2026-02-09 02:49:38.461795 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-09 02:49:38.461806 | orchestrator | Monday 09 February 2026 02:49:37 +0000 (0:00:00.983) 0:01:03.955 ******* 2026-02-09 02:49:38.461822 | orchestrator | ok: [testbed-manager] 2026-02-09 02:49:38.461831 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:49:38.461838 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:49:38.461844 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:49:38.461850 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:49:38.461856 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:49:38.461881 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:49:38.461888 | orchestrator | 2026-02-09 02:49:38.461895 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-09 02:49:38.461901 | orchestrator | Monday 
09 February 2026 02:49:37 +0000 (0:00:00.224) 0:01:04.180 ******* 2026-02-09 02:49:38.461908 | orchestrator | ok: [testbed-manager] 2026-02-09 02:49:38.461914 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:49:38.461920 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:49:38.461926 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:49:38.461932 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:49:38.461938 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:49:38.461944 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:49:38.461951 | orchestrator | 2026-02-09 02:49:38.461957 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-09 02:49:38.461963 | orchestrator | Monday 09 February 2026 02:49:38 +0000 (0:00:00.253) 0:01:04.433 ******* 2026-02-09 02:49:38.461970 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:49:38.461977 | orchestrator | 2026-02-09 02:49:38.461991 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-09 02:52:09.289913 | orchestrator | Monday 09 February 2026 02:49:38 +0000 (0:00:00.326) 0:01:04.760 ******* 2026-02-09 02:52:09.290064 | orchestrator | ok: [testbed-manager] 2026-02-09 02:52:09.290076 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:52:09.290083 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:52:09.290089 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:52:09.290095 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:52:09.290101 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:52:09.290107 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:52:09.290113 | orchestrator | 2026-02-09 02:52:09.290119 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-02-09 02:52:09.290126 | orchestrator | Monday 09 February 2026 02:49:40 +0000 (0:00:01.681) 0:01:06.441 ******* 2026-02-09 02:52:09.290132 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:52:09.290139 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:52:09.290145 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:52:09.290151 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:52:09.290156 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:52:09.290162 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:52:09.290168 | orchestrator | changed: [testbed-manager] 2026-02-09 02:52:09.290173 | orchestrator | 2026-02-09 02:52:09.290179 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-09 02:52:09.290186 | orchestrator | Monday 09 February 2026 02:49:40 +0000 (0:00:00.576) 0:01:07.018 ******* 2026-02-09 02:52:09.290192 | orchestrator | ok: [testbed-manager] 2026-02-09 02:52:09.290197 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:52:09.290203 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:52:09.290209 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:52:09.290214 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:52:09.290220 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:52:09.290226 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:52:09.290231 | orchestrator | 2026-02-09 02:52:09.290238 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-09 02:52:09.290244 | orchestrator | Monday 09 February 2026 02:49:40 +0000 (0:00:00.252) 0:01:07.270 ******* 2026-02-09 02:52:09.290250 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:52:09.290255 | orchestrator | ok: [testbed-manager] 2026-02-09 02:52:09.290261 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:52:09.290267 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:52:09.290272 | orchestrator | ok: [testbed-node-2] 
2026-02-09 02:52:09.290278 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:09.290284 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:09.290289 | orchestrator |
2026-02-09 02:52:09.290295 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-09 02:52:09.290301 | orchestrator | Monday 09 February 2026 02:49:42 +0000 (0:00:01.226) 0:01:08.497 *******
2026-02-09 02:52:09.290307 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:52:09.290312 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:52:09.290318 | orchestrator | changed: [testbed-manager]
2026-02-09 02:52:09.290324 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:52:09.290329 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:52:09.290335 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:52:09.290341 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:52:09.290346 | orchestrator |
2026-02-09 02:52:09.290355 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-09 02:52:09.290361 | orchestrator | Monday 09 February 2026 02:49:43 +0000 (0:00:01.756) 0:01:10.254 *******
2026-02-09 02:52:09.290367 | orchestrator | ok: [testbed-manager]
2026-02-09 02:52:09.290373 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:52:09.290378 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:52:09.290384 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:52:09.290390 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:52:09.290395 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:09.290401 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:09.290407 | orchestrator |
2026-02-09 02:52:09.290412 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-09 02:52:09.290435 | orchestrator | Monday 09 February 2026 02:49:46 +0000 (0:00:02.403) 0:01:12.657 *******
2026-02-09 02:52:09.290441 | orchestrator | ok: [testbed-manager]
2026-02-09 02:52:09.290447 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:52:09.290452 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:52:09.290458 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:52:09.290464 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:52:09.290478 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:09.290484 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:09.290496 | orchestrator |
2026-02-09 02:52:09.290502 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-09 02:52:09.290508 | orchestrator | Monday 09 February 2026 02:50:33 +0000 (0:00:47.412) 0:02:00.070 *******
2026-02-09 02:52:09.290514 | orchestrator | changed: [testbed-manager]
2026-02-09 02:52:09.290520 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:52:09.290525 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:52:09.290531 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:52:09.290537 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:52:09.290543 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:52:09.290548 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:52:09.290554 | orchestrator |
2026-02-09 02:52:09.290560 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-09 02:52:09.290566 | orchestrator | Monday 09 February 2026 02:51:54 +0000 (0:01:20.298) 0:03:20.369 *******
2026-02-09 02:52:09.290572 | orchestrator | ok: [testbed-manager]
2026-02-09 02:52:09.290578 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:52:09.290584 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:52:09.290589 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:52:09.290595 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:52:09.290601 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:09.290606 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:09.290612 | orchestrator |
2026-02-09 02:52:09.290618 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-09 02:52:09.290624 | orchestrator | Monday 09 February 2026 02:51:55 +0000 (0:00:01.707) 0:03:22.076 *******
2026-02-09 02:52:09.290629 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:52:09.290635 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:52:09.290641 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:52:09.290646 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:52:09.290652 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:09.290658 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:09.290663 | orchestrator | changed: [testbed-manager]
2026-02-09 02:52:09.290669 | orchestrator |
2026-02-09 02:52:09.290675 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-09 02:52:09.290719 | orchestrator | Monday 09 February 2026 02:52:08 +0000 (0:00:12.307) 0:03:34.384 *******
2026-02-09 02:52:09.290764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-09 02:52:09.290787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-09 02:52:09.290802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-09 02:52:09.290810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-09 02:52:09.290816 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-09 02:52:09.290822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-09 02:52:09.290828 | orchestrator |
2026-02-09 02:52:09.290834 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-09 02:52:09.290840 | orchestrator | Monday 09 February 2026 02:52:08 +0000 (0:00:00.384) 0:03:34.768 *******
2026-02-09 02:52:09.290846 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 02:52:09.290852 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:52:09.290857 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 02:52:09.290863 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 02:52:09.290869 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:52:09.290879 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 02:52:09.290889 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:52:09.290898 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:52:09.290910 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 02:52:09.290924 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 02:52:09.290932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 02:52:09.290942 | orchestrator |
2026-02-09 02:52:09.290951 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-09 02:52:09.290960 | orchestrator | Monday 09 February 2026 02:52:09 +0000 (0:00:00.744) 0:03:35.512 *******
2026-02-09 02:52:09.290968 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-09 02:52:09.290993 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-09 02:52:09.291002 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-09 02:52:09.291011 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-09 02:52:09.291021 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-09 02:52:09.291038 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-09 02:52:14.933499 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-09 02:52:14.933610 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-09 02:52:14.933640 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-09 02:52:14.933764 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-09 02:52:14.933781 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-09 02:52:14.933790 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-09 02:52:14.933798 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-09 02:52:14.933805 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-09 02:52:14.933813 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-09 02:52:14.933821 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-09 02:52:14.933828 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-09 02:52:14.933836 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-09 02:52:14.933843 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-09 02:52:14.933850 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-09 02:52:14.933858 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-09 02:52:14.933865 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-09 02:52:14.933873 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-09 02:52:14.933880 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-09 02:52:14.933887 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-09 02:52:14.933895 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:52:14.933904 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-09 02:52:14.933911 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-09 02:52:14.933919 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-09 02:52:14.933926 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-09 02:52:14.933933 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-09 02:52:14.933940 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-09 02:52:14.933948 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-09 02:52:14.933955 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-09 02:52:14.933963 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:52:14.933970 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-09 02:52:14.933989 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-09 02:52:14.933997 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-09 02:52:14.934005 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-09 02:52:14.934050 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-09 02:52:14.934060 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-09 02:52:14.934078 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-09 02:52:14.934087 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:52:14.934096 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:52:14.934104 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-09 02:52:14.934112 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-09 02:52:14.934121 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-09 02:52:14.934129 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-09 02:52:14.934138 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-09 02:52:14.934163 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-09 02:52:14.934173 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-09 02:52:14.934181 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-09 02:52:14.934190 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-09 02:52:14.934198 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-09 02:52:14.934207 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-09 02:52:14.934215 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-09 02:52:14.934223 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-09 02:52:14.934232 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-09 02:52:14.934244 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-09 02:52:14.934256 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-09 02:52:14.934269 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-09 02:52:14.934281 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-09 02:52:14.934293 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-09 02:52:14.934305 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-09 02:52:14.934317 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-09 02:52:14.934328 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-09 02:52:14.934339 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-09 02:52:14.934349 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-09 02:52:14.934363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-09 02:52:14.934374 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-09 02:52:14.934386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-09 02:52:14.934398 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-09 02:52:14.934410 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-09 02:52:14.934426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-09 02:52:14.934449 | orchestrator |
2026-02-09 02:52:14.934464 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-09 02:52:14.934477 | orchestrator | Monday 09 February 2026 02:52:13 +0000 (0:00:04.627) 0:03:40.140 *******
2026-02-09 02:52:14.934490 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-09 02:52:14.934517 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-09 02:52:14.934530 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-09 02:52:14.934541 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-09 02:52:14.934561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-09 02:52:14.934573 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-09 02:52:14.934586 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-09 02:52:14.934598 | orchestrator |
2026-02-09 02:52:14.934610 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-09 02:52:14.934623 | orchestrator | Monday 09 February 2026 02:52:14 +0000 (0:00:00.566) 0:03:40.706 *******
2026-02-09 02:52:14.934633 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-09 02:52:14.934641 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:52:14.934648 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-09 02:52:14.934655 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-09 02:52:14.934662 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:52:14.934670 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:52:14.934704 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-09 02:52:14.934712 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:52:14.934719 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-09 02:52:14.934727 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-09 02:52:14.934744 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-09 02:52:27.755405 | orchestrator |
2026-02-09 02:52:27.755518 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-09 02:52:27.755532 | orchestrator | Monday 09 February 2026 02:52:14 +0000 (0:00:00.524) 0:03:41.231 *******
2026-02-09 02:52:27.755543 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-09
02:52:27.755554 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-09 02:52:27.755565 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:52:27.755576 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-09 02:52:27.755586 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:52:27.755596 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-09 02:52:27.755605 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:52:27.755615 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:52:27.755625 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-09 02:52:27.755635 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-09 02:52:27.755644 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-09 02:52:27.755654 | orchestrator | 2026-02-09 02:52:27.755664 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-09 02:52:27.755757 | orchestrator | Monday 09 February 2026 02:52:15 +0000 (0:00:00.588) 0:03:41.819 ******* 2026-02-09 02:52:27.755768 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-09 02:52:27.755778 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:52:27.755791 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-09 02:52:27.755809 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:52:27.755826 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-09 02:52:27.755843 
| orchestrator | skipping: [testbed-node-1] 2026-02-09 02:52:27.755862 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-09 02:52:27.755879 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:52:27.755896 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-09 02:52:27.755906 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-09 02:52:27.755916 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-09 02:52:27.755926 | orchestrator | 2026-02-09 02:52:27.755936 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-09 02:52:27.755947 | orchestrator | Monday 09 February 2026 02:52:16 +0000 (0:00:00.564) 0:03:42.384 ******* 2026-02-09 02:52:27.755959 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:52:27.755971 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:52:27.755982 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:52:27.755993 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:52:27.756004 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:52:27.756015 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:52:27.756026 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:52:27.756037 | orchestrator | 2026-02-09 02:52:27.756048 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-09 02:52:27.756060 | orchestrator | Monday 09 February 2026 02:52:16 +0000 (0:00:00.339) 0:03:42.724 ******* 2026-02-09 02:52:27.756071 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:52:27.756083 | orchestrator | ok: [testbed-manager] 2026-02-09 02:52:27.756094 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:52:27.756105 | orchestrator | ok: [testbed-node-2] 
2026-02-09 02:52:27.756116 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:52:27.756128 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:27.756140 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:27.756151 | orchestrator |
2026-02-09 02:52:27.756164 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-09 02:52:27.756177 | orchestrator | Monday 09 February 2026 02:52:21 +0000 (0:00:05.567) 0:03:48.291 *******
2026-02-09 02:52:27.756189 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-09 02:52:27.756203 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-09 02:52:27.756215 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:52:27.756227 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-09 02:52:27.756240 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:52:27.756253 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-09 02:52:27.756265 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:52:27.756277 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-09 02:52:27.756290 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:52:27.756303 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-09 02:52:27.756335 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:52:27.756347 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:52:27.756358 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-09 02:52:27.756369 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:52:27.756380 | orchestrator |
2026-02-09 02:52:27.756443 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-02-09 02:52:27.756455 | orchestrator | Monday 09 February 2026 02:52:22 +0000 (0:00:00.334) 0:03:48.625 *******
2026-02-09 02:52:27.756466 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-02-09 02:52:27.756477 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-02-09 02:52:27.756489 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-02-09 02:52:27.756532 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-02-09 02:52:27.756544 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-02-09 02:52:27.756555 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-02-09 02:52:27.756566 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-02-09 02:52:27.756577 | orchestrator |
2026-02-09 02:52:27.756588 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-02-09 02:52:27.756599 | orchestrator | Monday 09 February 2026 02:52:23 +0000 (0:00:01.083) 0:03:49.708 *******
2026-02-09 02:52:27.756612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:52:27.756625 | orchestrator |
2026-02-09 02:52:27.756636 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-02-09 02:52:27.756648 | orchestrator | Monday 09 February 2026 02:52:23 +0000 (0:00:00.440) 0:03:50.149 *******
2026-02-09 02:52:27.756658 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:52:27.756696 | orchestrator | ok: [testbed-manager]
2026-02-09 02:52:27.756709 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:52:27.756720 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:52:27.756730 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:27.756741 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:52:27.756752 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:27.756762 | orchestrator |
2026-02-09 02:52:27.756773 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-02-09 02:52:27.756784 | orchestrator | Monday 09 February 2026 02:52:25 +0000 (0:00:01.187) 0:03:51.336 *******
2026-02-09 02:52:27.756795 | orchestrator | ok: [testbed-manager]
2026-02-09 02:52:27.756806 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:52:27.756817 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:52:27.756827 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:52:27.756838 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:27.756849 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:27.756860 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:52:27.756870 | orchestrator |
2026-02-09 02:52:27.756881 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-02-09 02:52:27.756892 | orchestrator | Monday 09 February 2026 02:52:25 +0000 (0:00:00.602) 0:03:51.939 *******
2026-02-09 02:52:27.756903 | orchestrator | changed: [testbed-manager]
2026-02-09 02:52:27.756914 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:52:27.756925 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:52:27.756936 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:52:27.756947 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:52:27.756958 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:52:27.756969 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:52:27.756980 | orchestrator |
2026-02-09 02:52:27.756990 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-02-09 02:52:27.757001 | orchestrator | Monday 09 February 2026 02:52:26 +0000 (0:00:00.563) 0:03:52.576 *******
2026-02-09 02:52:27.757012 | orchestrator | ok: [testbed-manager]
2026-02-09 02:52:27.757023 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:52:27.757034 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:52:27.757045 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:27.757056 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:52:27.757066 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:27.757077 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:52:27.757088 | orchestrator |
2026-02-09 02:52:27.757099 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-02-09 02:52:27.757119 | orchestrator | Monday 09 February 2026 02:52:26 +0000 (0:00:00.563) 0:03:53.140 *******
2026-02-09 02:52:27.757141 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770604003.9894245, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:27.757157 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770604051.7989564, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:27.757169 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770604016.1056828, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:27.757203 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770604016.054543, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371479 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770604046.5566297, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371581 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770604030.4634378, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371596 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770604037.830918, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371634 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371661 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371740 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371752 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371792 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371806 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371817 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-09 02:52:32.371838 | orchestrator |
2026-02-09 02:52:32.371851 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-02-09 02:52:32.371863 | orchestrator | Monday 09 February 2026 02:52:27 +0000 (0:00:00.914) 0:03:54.055 *******
2026-02-09 02:52:32.371875 | orchestrator | changed: [testbed-manager]
2026-02-09 02:52:32.371887 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:52:32.371898 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:52:32.371908 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:52:32.371920 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:52:32.371931 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:52:32.371942 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:52:32.371953 | orchestrator |
2026-02-09 02:52:32.371964 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-02-09 02:52:32.371975 | orchestrator | Monday 09 February 2026 02:52:28 +0000 (0:00:01.063) 0:03:55.118 *******
2026-02-09 02:52:32.371985 | orchestrator | changed: [testbed-manager]
2026-02-09 02:52:32.371996 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:52:32.372007 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:52:32.372020 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:52:32.372032 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:52:32.372045 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:52:32.372057 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:52:32.372070 | orchestrator |
2026-02-09 02:52:32.372087 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-02-09 02:52:32.372101 | orchestrator | Monday 09 February 2026 02:52:29 +0000 (0:00:01.075) 0:03:56.194 *******
2026-02-09 02:52:32.372113 | orchestrator | changed: [testbed-manager]
2026-02-09 02:52:32.372125 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:52:32.372138 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:52:32.372150 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:52:32.372163 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:52:32.372175 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:52:32.372188 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:52:32.372200 | orchestrator |
2026-02-09 02:52:32.372213 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-02-09 02:52:32.372226 | orchestrator | Monday 09 February 2026 02:52:31 +0000 (0:00:01.128) 0:03:57.323 *******
2026-02-09 02:52:32.372239 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:52:32.372252 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:52:32.372265 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:52:32.372278 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:52:32.372288 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:52:32.372299 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:52:32.372310 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:52:32.372321 | orchestrator |
2026-02-09 02:52:32.372332 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-02-09 02:52:32.372343 | orchestrator | Monday 09 February 2026 02:52:31 +0000 (0:00:00.253) 0:03:57.576 *******
2026-02-09 02:52:32.372354 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:52:32.372365 | orchestrator | ok: [testbed-manager]
2026-02-09 02:52:32.372376 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:52:32.372387 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:52:32.372398 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:52:32.372408 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:52:32.372419 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:52:32.372430 | orchestrator |
2026-02-09 02:52:32.372441 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-02-09 02:52:32.372452 | orchestrator | Monday 09 February 2026 02:52:31 +0000 (0:00:00.707) 0:03:58.284 *******
2026-02-09 02:52:32.372464 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:52:32.372484 | orchestrator |
2026-02-09 02:52:32.372496 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-02-09 02:52:32.372514 | orchestrator | Monday 09 February 2026 02:52:32 +0000 (0:00:00.388) 0:03:58.673 *******
2026-02-09 02:53:45.856305 | orchestrator | ok: [testbed-manager]
2026-02-09 02:53:45.856387 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:53:45.856397 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:53:45.856403 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:53:45.856409 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:53:45.856416 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:53:45.856422 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:53:45.856428 | orchestrator |
2026-02-09 02:53:45.856435 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-02-09 02:53:45.856442 | orchestrator | Monday 09 February 2026 02:52:39 +0000 (0:00:07.001) 0:04:05.674 *******
2026-02-09 02:53:45.856448 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:53:45.856454 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:53:45.856459 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:53:45.856465 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:53:45.856471 | orchestrator | ok: [testbed-manager]
2026-02-09 02:53:45.856477 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:53:45.856483 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:53:45.856488 | orchestrator |
2026-02-09 02:53:45.856494 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-02-09 02:53:45.856500 | orchestrator | Monday 09 February 2026 02:52:40 +0000 (0:00:01.060) 0:04:06.735 *******
2026-02-09 02:53:45.856506 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:53:45.856512 | orchestrator | ok: [testbed-manager]
2026-02-09 02:53:45.856518 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:53:45.856523 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:53:45.856529 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:53:45.856535 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:53:45.856541 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:53:45.856547 | orchestrator |
2026-02-09 02:53:45.856552 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-02-09 02:53:45.856558 | orchestrator | Monday 09 February 2026 02:52:41 +0000 (0:00:01.114) 0:04:07.849 *******
2026-02-09 02:53:45.856564 | orchestrator | ok: [testbed-manager]
2026-02-09 02:53:45.856570 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:53:45.856576 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:53:45.856581 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:53:45.856588 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:53:45.856594 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:53:45.856599 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:53:45.856605 | orchestrator |
2026-02-09 02:53:45.856611 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-09 02:53:45.856618 | orchestrator | Monday 09 February 2026 02:52:41 +0000 (0:00:00.317) 0:04:08.167 *******
2026-02-09 02:53:45.856693 | orchestrator | ok: [testbed-manager]
2026-02-09 02:53:45.856699 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:53:45.856705 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:53:45.856710 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:53:45.856716 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:53:45.856722 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:53:45.856728 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:53:45.856734 | orchestrator |
2026-02-09 02:53:45.856740 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-09 02:53:45.856746 | orchestrator | Monday 09 February 2026 02:52:42 +0000 (0:00:00.356) 0:04:08.524 *******
2026-02-09 02:53:45.856752 | orchestrator | ok: [testbed-manager]
2026-02-09 02:53:45.856758 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:53:45.856763 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:53:45.856784 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:53:45.856790 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:53:45.856796 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:53:45.856801 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:53:45.856807 | orchestrator |
2026-02-09 02:53:45.856813 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-09 02:53:45.856819 | orchestrator | Monday 09 February 2026 02:52:42 +0000 (0:00:00.358) 0:04:08.882 *******
2026-02-09 02:53:45.856825 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:53:45.856831 | orchestrator | ok: [testbed-manager]
2026-02-09 02:53:45.856837 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:53:45.856843 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:53:45.856849 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:53:45.856855 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:53:45.856860 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:53:45.856866 | orchestrator |
2026-02-09 02:53:45.856872 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-09 02:53:45.856878 | orchestrator | Monday 09 February 2026 02:52:48 +0000 (0:00:05.645) 0:04:14.528 *******
2026-02-09 02:53:45.856886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:53:45.856894 | orchestrator |
2026-02-09 02:53:45.856900 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-09 02:53:45.856906 | orchestrator | Monday 09 February 2026 02:52:48 +0000 (0:00:00.384) 0:04:14.912 *******
2026-02-09 02:53:45.856912 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-09 02:53:45.856920 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-09 02:53:45.856930 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-09 02:53:45.856939 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-09 02:53:45.856948 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:53:45.856967 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-09 02:53:45.856977 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-09 02:53:45.856985 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:53:45.856994 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-09 02:53:45.857003 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-09 02:53:45.857011 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:53:45.857020 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-09 02:53:45.857029 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-09 02:53:45.857038 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:53:45.857048 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-09 02:53:45.857057 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:53:45.857083 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-09 02:53:45.857093 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:53:45.857103 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-09 02:53:45.857110 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-09 02:53:45.857116 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:53:45.857122 | orchestrator |
2026-02-09 02:53:45.857127 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-09 02:53:45.857133 | orchestrator | Monday 09 February 2026 02:52:48 +0000 (0:00:00.396) 0:04:15.309 *******
2026-02-09 02:53:45.857140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:53:45.857146 | orchestrator |
2026-02-09 02:53:45.857152 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-09 02:53:45.857163 | orchestrator | Monday 09 February 2026 02:52:49 +0000 (0:00:00.454) 0:04:15.763 *******
2026-02-09 02:53:45.857169 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-09 02:53:45.857175 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-09 02:53:45.857181 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:53:45.857187 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:53:45.857192 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-09 02:53:45.857198 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-09 02:53:45.857204 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:53:45.857209 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:53:45.857215 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-09 02:53:45.857221 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-09 02:53:45.857226 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:53:45.857232 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:53:45.857238 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-09 02:53:45.857243 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:53:45.857249 | orchestrator |
2026-02-09 02:53:45.857255 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-09 02:53:45.857261 | orchestrator | Monday 09 February 2026 02:52:49 +0000 (0:00:00.474) 0:04:16.110 *******
2026-02-09 02:53:45.857267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:53:45.857273 | orchestrator |
2026-02-09 02:53:45.857279 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-09 02:53:45.857284 | orchestrator | Monday 09 February 2026 02:52:50 +0000 (0:00:00.474) 0:04:16.585 *******
2026-02-09 02:53:45.857290 |
orchestrator | changed: [testbed-node-5] 2026-02-09 02:53:45.857296 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:53:45.857302 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:53:45.857307 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:53:45.857316 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:53:45.857322 | orchestrator | changed: [testbed-manager] 2026-02-09 02:53:45.857328 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:53:45.857334 | orchestrator | 2026-02-09 02:53:45.857339 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-02-09 02:53:45.857345 | orchestrator | Monday 09 February 2026 02:53:23 +0000 (0:00:32.969) 0:04:49.554 ******* 2026-02-09 02:53:45.857351 | orchestrator | changed: [testbed-manager] 2026-02-09 02:53:45.857357 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:53:45.857362 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:53:45.857368 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:53:45.857374 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:53:45.857380 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:53:45.857385 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:53:45.857391 | orchestrator | 2026-02-09 02:53:45.857397 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-02-09 02:53:45.857403 | orchestrator | Monday 09 February 2026 02:53:31 +0000 (0:00:07.804) 0:04:57.359 ******* 2026-02-09 02:53:45.857408 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:53:45.857414 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:53:45.857420 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:53:45.857426 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:53:45.857431 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:53:45.857437 | orchestrator | changed: [testbed-manager] 2026-02-09 02:53:45.857442 | orchestrator | changed: 
[testbed-node-1] 2026-02-09 02:53:45.857448 | orchestrator | 2026-02-09 02:53:45.857454 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-02-09 02:53:45.857463 | orchestrator | Monday 09 February 2026 02:53:38 +0000 (0:00:07.317) 0:05:04.677 ******* 2026-02-09 02:53:45.857469 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:53:45.857475 | orchestrator | ok: [testbed-manager] 2026-02-09 02:53:45.857480 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:53:45.857486 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:53:45.857492 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:53:45.857498 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:53:45.857503 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:53:45.857509 | orchestrator | 2026-02-09 02:53:45.857515 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-02-09 02:53:45.857521 | orchestrator | Monday 09 February 2026 02:53:40 +0000 (0:00:01.660) 0:05:06.337 ******* 2026-02-09 02:53:45.857526 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:53:45.857532 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:53:45.857538 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:53:45.857544 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:53:45.857549 | orchestrator | changed: [testbed-manager] 2026-02-09 02:53:45.857555 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:53:45.857561 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:53:45.857567 | orchestrator | 2026-02-09 02:53:45.857576 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-02-09 02:53:57.057843 | orchestrator | Monday 09 February 2026 02:53:45 +0000 (0:00:05.812) 0:05:12.150 ******* 2026-02-09 02:53:57.057938 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:53:57.057953 | orchestrator | 2026-02-09 02:53:57.057961 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-02-09 02:53:57.057968 | orchestrator | Monday 09 February 2026 02:53:46 +0000 (0:00:00.480) 0:05:12.630 ******* 2026-02-09 02:53:57.057975 | orchestrator | changed: [testbed-manager] 2026-02-09 02:53:57.057983 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:53:57.057989 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:53:57.057995 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:53:57.058001 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:53:57.058007 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:53:57.058062 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:53:57.058069 | orchestrator | 2026-02-09 02:53:57.058075 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-02-09 02:53:57.058082 | orchestrator | Monday 09 February 2026 02:53:47 +0000 (0:00:00.731) 0:05:13.361 ******* 2026-02-09 02:53:57.058088 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:53:57.058096 | orchestrator | ok: [testbed-manager] 2026-02-09 02:53:57.058102 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:53:57.058109 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:53:57.058114 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:53:57.058118 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:53:57.058122 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:53:57.058126 | orchestrator | 2026-02-09 02:53:57.058130 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-02-09 02:53:57.058134 | orchestrator | Monday 09 February 2026 02:53:48 +0000 (0:00:01.693) 0:05:15.055 ******* 2026-02-09 02:53:57.058138 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:53:57.058142 | 
orchestrator | changed: [testbed-node-3] 2026-02-09 02:53:57.058146 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:53:57.058150 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:53:57.058154 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:53:57.058158 | orchestrator | changed: [testbed-manager] 2026-02-09 02:53:57.058162 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:53:57.058166 | orchestrator | 2026-02-09 02:53:57.058170 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-02-09 02:53:57.058174 | orchestrator | Monday 09 February 2026 02:53:49 +0000 (0:00:00.749) 0:05:15.804 ******* 2026-02-09 02:53:57.058194 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:53:57.058200 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:53:57.058206 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:53:57.058212 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:53:57.058218 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:53:57.058224 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:53:57.058230 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:53:57.058236 | orchestrator | 2026-02-09 02:53:57.058242 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-02-09 02:53:57.058249 | orchestrator | Monday 09 February 2026 02:53:49 +0000 (0:00:00.290) 0:05:16.095 ******* 2026-02-09 02:53:57.058255 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:53:57.058261 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:53:57.058268 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:53:57.058288 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:53:57.058294 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:53:57.058301 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:53:57.058307 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:53:57.058314 | 
orchestrator | 2026-02-09 02:53:57.058320 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-02-09 02:53:57.058327 | orchestrator | Monday 09 February 2026 02:53:50 +0000 (0:00:00.405) 0:05:16.500 ******* 2026-02-09 02:53:57.058333 | orchestrator | ok: [testbed-manager] 2026-02-09 02:53:57.058339 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:53:57.058345 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:53:57.058351 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:53:57.058357 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:53:57.058364 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:53:57.058370 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:53:57.058376 | orchestrator | 2026-02-09 02:53:57.058383 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-02-09 02:53:57.058389 | orchestrator | Monday 09 February 2026 02:53:50 +0000 (0:00:00.332) 0:05:16.833 ******* 2026-02-09 02:53:57.058395 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:53:57.058401 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:53:57.058407 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:53:57.058413 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:53:57.058419 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:53:57.058426 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:53:57.058433 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:53:57.058439 | orchestrator | 2026-02-09 02:53:57.058446 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-02-09 02:53:57.058453 | orchestrator | Monday 09 February 2026 02:53:50 +0000 (0:00:00.282) 0:05:17.115 ******* 2026-02-09 02:53:57.058460 | orchestrator | ok: [testbed-manager] 2026-02-09 02:53:57.058467 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:53:57.058473 | orchestrator | ok: [testbed-node-4] 2026-02-09 
02:53:57.058478 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:53:57.058484 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:53:57.058489 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:53:57.058494 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:53:57.058500 | orchestrator | 2026-02-09 02:53:57.058508 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-02-09 02:53:57.058514 | orchestrator | Monday 09 February 2026 02:53:51 +0000 (0:00:00.358) 0:05:17.473 ******* 2026-02-09 02:53:57.058521 | orchestrator | ok: [testbed-manager] =>  2026-02-09 02:53:57.058527 | orchestrator |  docker_version: 5:27.5.1 2026-02-09 02:53:57.058534 | orchestrator | ok: [testbed-node-3] =>  2026-02-09 02:53:57.058541 | orchestrator |  docker_version: 5:27.5.1 2026-02-09 02:53:57.058548 | orchestrator | ok: [testbed-node-4] =>  2026-02-09 02:53:57.058554 | orchestrator |  docker_version: 5:27.5.1 2026-02-09 02:53:57.058561 | orchestrator | ok: [testbed-node-5] =>  2026-02-09 02:53:57.058567 | orchestrator |  docker_version: 5:27.5.1 2026-02-09 02:53:57.058591 | orchestrator | ok: [testbed-node-0] =>  2026-02-09 02:53:57.058605 | orchestrator |  docker_version: 5:27.5.1 2026-02-09 02:53:57.058611 | orchestrator | ok: [testbed-node-1] =>  2026-02-09 02:53:57.058725 | orchestrator |  docker_version: 5:27.5.1 2026-02-09 02:53:57.058734 | orchestrator | ok: [testbed-node-2] =>  2026-02-09 02:53:57.058740 | orchestrator |  docker_version: 5:27.5.1 2026-02-09 02:53:57.058747 | orchestrator | 2026-02-09 02:53:57.058753 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-02-09 02:53:57.058761 | orchestrator | Monday 09 February 2026 02:53:51 +0000 (0:00:00.278) 0:05:17.752 ******* 2026-02-09 02:53:57.058768 | orchestrator | ok: [testbed-manager] =>  2026-02-09 02:53:57.058775 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-09 02:53:57.058781 | orchestrator | ok: 
[testbed-node-3] =>  2026-02-09 02:53:57.058788 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-09 02:53:57.058794 | orchestrator | ok: [testbed-node-4] =>  2026-02-09 02:53:57.058801 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-09 02:53:57.058808 | orchestrator | ok: [testbed-node-5] =>  2026-02-09 02:53:57.058814 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-09 02:53:57.058821 | orchestrator | ok: [testbed-node-0] =>  2026-02-09 02:53:57.058827 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-09 02:53:57.058834 | orchestrator | ok: [testbed-node-1] =>  2026-02-09 02:53:57.058841 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-09 02:53:57.058848 | orchestrator | ok: [testbed-node-2] =>  2026-02-09 02:53:57.058855 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-09 02:53:57.058861 | orchestrator | 2026-02-09 02:53:57.058868 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-02-09 02:53:57.058875 | orchestrator | Monday 09 February 2026 02:53:51 +0000 (0:00:00.305) 0:05:18.057 ******* 2026-02-09 02:53:57.058881 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:53:57.058888 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:53:57.058895 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:53:57.058901 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:53:57.058908 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:53:57.058915 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:53:57.058922 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:53:57.058928 | orchestrator | 2026-02-09 02:53:57.058934 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-02-09 02:53:57.058941 | orchestrator | Monday 09 February 2026 02:53:52 +0000 (0:00:00.311) 0:05:18.369 ******* 2026-02-09 02:53:57.058947 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:53:57.058951 | orchestrator | 
skipping: [testbed-node-3] 2026-02-09 02:53:57.058954 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:53:57.058958 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:53:57.058962 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:53:57.058965 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:53:57.058969 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:53:57.058973 | orchestrator | 2026-02-09 02:53:57.058976 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-02-09 02:53:57.058980 | orchestrator | Monday 09 February 2026 02:53:52 +0000 (0:00:00.280) 0:05:18.650 ******* 2026-02-09 02:53:57.058985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:53:57.058990 | orchestrator | 2026-02-09 02:53:57.059001 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-02-09 02:53:57.059005 | orchestrator | Monday 09 February 2026 02:53:52 +0000 (0:00:00.496) 0:05:19.147 ******* 2026-02-09 02:53:57.059009 | orchestrator | ok: [testbed-manager] 2026-02-09 02:53:57.059013 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:53:57.059016 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:53:57.059020 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:53:57.059024 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:53:57.059034 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:53:57.059037 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:53:57.059041 | orchestrator | 2026-02-09 02:53:57.059045 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-02-09 02:53:57.059049 | orchestrator | Monday 09 February 2026 02:53:53 +0000 (0:00:00.977) 0:05:20.124 ******* 2026-02-09 
02:53:57.059053 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:53:57.059056 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:53:57.059060 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:53:57.059064 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:53:57.059067 | orchestrator | ok: [testbed-manager] 2026-02-09 02:53:57.059071 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:53:57.059075 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:53:57.059078 | orchestrator | 2026-02-09 02:53:57.059082 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-02-09 02:53:57.059087 | orchestrator | Monday 09 February 2026 02:53:56 +0000 (0:00:02.824) 0:05:22.949 ******* 2026-02-09 02:53:57.059091 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-02-09 02:53:57.059095 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-02-09 02:53:57.059099 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-02-09 02:53:57.059103 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:53:57.059107 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-02-09 02:53:57.059111 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-02-09 02:53:57.059114 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-02-09 02:53:57.059118 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-02-09 02:53:57.059122 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-02-09 02:53:57.059125 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-02-09 02:53:57.059129 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:53:57.059133 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-02-09 02:53:57.059136 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-02-09 02:53:57.059140 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-02-09 02:53:57.059144 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:53:57.059147 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-02-09 02:53:57.059159 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-02-09 02:54:54.011446 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-02-09 02:54:54.011560 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:54:54.011577 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-02-09 02:54:54.011589 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-02-09 02:54:54.011655 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-02-09 02:54:54.011668 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:54:54.011679 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:54:54.011690 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-02-09 02:54:54.011701 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-02-09 02:54:54.011712 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-02-09 02:54:54.011723 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:54:54.011734 | orchestrator | 2026-02-09 02:54:54.011746 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-02-09 02:54:54.011759 | orchestrator | Monday 09 February 2026 02:53:57 +0000 (0:00:00.621) 0:05:23.570 ******* 2026-02-09 02:54:54.011770 | orchestrator | ok: [testbed-manager] 2026-02-09 02:54:54.011781 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.011792 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.011803 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.011815 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.011826 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.011860 | orchestrator | changed: [testbed-node-1] 
2026-02-09 02:54:54.011872 | orchestrator | 2026-02-09 02:54:54.011883 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-02-09 02:54:54.011893 | orchestrator | Monday 09 February 2026 02:54:03 +0000 (0:00:06.326) 0:05:29.897 ******* 2026-02-09 02:54:54.011904 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.011915 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.011926 | orchestrator | ok: [testbed-manager] 2026-02-09 02:54:54.011937 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.011947 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.011958 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.011969 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.011982 | orchestrator | 2026-02-09 02:54:54.011997 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-02-09 02:54:54.012010 | orchestrator | Monday 09 February 2026 02:54:04 +0000 (0:00:01.030) 0:05:30.927 ******* 2026-02-09 02:54:54.012023 | orchestrator | ok: [testbed-manager] 2026-02-09 02:54:54.012036 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.012048 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.012061 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.012075 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.012088 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.012101 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.012113 | orchestrator | 2026-02-09 02:54:54.012126 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-02-09 02:54:54.012140 | orchestrator | Monday 09 February 2026 02:54:11 +0000 (0:00:07.383) 0:05:38.311 ******* 2026-02-09 02:54:54.012153 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.012167 | orchestrator | changed: [testbed-manager] 2026-02-09 02:54:54.012179 | 
orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.012197 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.012216 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.012237 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.012257 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.012276 | orchestrator | 2026-02-09 02:54:54.012294 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-02-09 02:54:54.012312 | orchestrator | Monday 09 February 2026 02:54:15 +0000 (0:00:03.385) 0:05:41.696 ******* 2026-02-09 02:54:54.012331 | orchestrator | ok: [testbed-manager] 2026-02-09 02:54:54.012348 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.012367 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.012387 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.012406 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.012424 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.012443 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.012462 | orchestrator | 2026-02-09 02:54:54.012483 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-02-09 02:54:54.012495 | orchestrator | Monday 09 February 2026 02:54:16 +0000 (0:00:01.280) 0:05:42.977 ******* 2026-02-09 02:54:54.012506 | orchestrator | ok: [testbed-manager] 2026-02-09 02:54:54.012517 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.012527 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.012538 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.012549 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.012560 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.012571 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.012582 | orchestrator | 2026-02-09 02:54:54.012592 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-02-09 02:54:54.012635 | orchestrator | Monday 09 February 2026 02:54:18 +0000 (0:00:01.553) 0:05:44.531 ******* 2026-02-09 02:54:54.012647 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:54:54.012658 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:54:54.012668 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:54:54.012679 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:54:54.012708 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:54:54.012728 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:54:54.012747 | orchestrator | changed: [testbed-manager] 2026-02-09 02:54:54.012780 | orchestrator | 2026-02-09 02:54:54.012812 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-02-09 02:54:54.012829 | orchestrator | Monday 09 February 2026 02:54:18 +0000 (0:00:00.618) 0:05:45.150 ******* 2026-02-09 02:54:54.012846 | orchestrator | ok: [testbed-manager] 2026-02-09 02:54:54.012863 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.012880 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.012897 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.012914 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.012932 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.012951 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.012969 | orchestrator | 2026-02-09 02:54:54.012988 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-02-09 02:54:54.013034 | orchestrator | Monday 09 February 2026 02:54:27 +0000 (0:00:09.140) 0:05:54.290 ******* 2026-02-09 02:54:54.013056 | orchestrator | changed: [testbed-manager] 2026-02-09 02:54:54.013075 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.013093 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.013111 | orchestrator | changed: [testbed-node-5] 
2026-02-09 02:54:54.013126 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.013137 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.013148 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.013159 | orchestrator | 2026-02-09 02:54:54.013170 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-02-09 02:54:54.013181 | orchestrator | Monday 09 February 2026 02:54:28 +0000 (0:00:00.884) 0:05:55.175 ******* 2026-02-09 02:54:54.013192 | orchestrator | ok: [testbed-manager] 2026-02-09 02:54:54.013203 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.013213 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.013224 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.013235 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.013245 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.013256 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.013267 | orchestrator | 2026-02-09 02:54:54.013277 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-02-09 02:54:54.013288 | orchestrator | Monday 09 February 2026 02:54:37 +0000 (0:00:08.468) 0:06:03.644 ******* 2026-02-09 02:54:54.013299 | orchestrator | ok: [testbed-manager] 2026-02-09 02:54:54.013310 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.013320 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.013331 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.013342 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.013352 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.013363 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.013373 | orchestrator | 2026-02-09 02:54:54.013384 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-02-09 02:54:54.013395 | orchestrator | Monday 09 February 2026 
02:54:47 +0000 (0:00:10.659) 0:06:14.303 ******* 2026-02-09 02:54:54.013407 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-02-09 02:54:54.013426 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-02-09 02:54:54.013444 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-02-09 02:54:54.013462 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-02-09 02:54:54.013480 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-02-09 02:54:54.013498 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-02-09 02:54:54.013514 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-02-09 02:54:54.013531 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-02-09 02:54:54.013547 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-02-09 02:54:54.013580 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-02-09 02:54:54.013626 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-02-09 02:54:54.013710 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-02-09 02:54:54.013730 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-02-09 02:54:54.013741 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-02-09 02:54:54.013752 | orchestrator | 2026-02-09 02:54:54.013763 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-02-09 02:54:54.013774 | orchestrator | Monday 09 February 2026 02:54:49 +0000 (0:00:01.212) 0:06:15.516 ******* 2026-02-09 02:54:54.013790 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:54:54.013801 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:54:54.013812 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:54:54.013823 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:54:54.013834 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:54:54.013845 | orchestrator | skipping: 
[testbed-node-1] 2026-02-09 02:54:54.013855 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:54:54.013866 | orchestrator | 2026-02-09 02:54:54.013877 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-02-09 02:54:54.013888 | orchestrator | Monday 09 February 2026 02:54:49 +0000 (0:00:00.548) 0:06:16.064 ******* 2026-02-09 02:54:54.013899 | orchestrator | ok: [testbed-manager] 2026-02-09 02:54:54.013910 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:54:54.013921 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:54:54.013932 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:54:54.013943 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:54:54.013954 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:54:54.013964 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:54:54.013975 | orchestrator | 2026-02-09 02:54:54.013987 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-02-09 02:54:54.013999 | orchestrator | Monday 09 February 2026 02:54:52 +0000 (0:00:03.219) 0:06:19.284 ******* 2026-02-09 02:54:54.014010 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:54:54.014084 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:54:54.014096 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:54:54.014106 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:54:54.014117 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:54:54.014128 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:54:54.014139 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:54:54.014150 | orchestrator | 2026-02-09 02:54:54.014162 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-02-09 02:54:54.014174 | orchestrator | Monday 09 February 2026 02:54:53 +0000 (0:00:00.548) 0:06:19.832 ******* 2026-02-09 
02:54:54.014185 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-02-09 02:54:54.014196 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-02-09 02:54:54.014207 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:54:54.014218 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-02-09 02:54:54.014229 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-02-09 02:54:54.014240 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:54:54.014251 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-02-09 02:54:54.014262 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-02-09 02:54:54.014273 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:54:54.014297 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-02-09 02:55:13.689696 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-02-09 02:55:13.689898 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:55:13.689924 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-02-09 02:55:13.689936 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-02-09 02:55:13.689947 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:55:13.689986 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-02-09 02:55:13.689998 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-02-09 02:55:13.690009 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:55:13.690079 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-02-09 02:55:13.690093 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-02-09 02:55:13.690141 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:55:13.690156 | orchestrator | 2026-02-09 02:55:13.690171 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install 
python bindings from pip)] *** 2026-02-09 02:55:13.690186 | orchestrator | Monday 09 February 2026 02:54:54 +0000 (0:00:00.764) 0:06:20.597 ******* 2026-02-09 02:55:13.690199 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:55:13.690212 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:55:13.690224 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:55:13.690238 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:55:13.690251 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:55:13.690265 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:55:13.690278 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:55:13.690290 | orchestrator | 2026-02-09 02:55:13.690304 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-09 02:55:13.690317 | orchestrator | Monday 09 February 2026 02:54:54 +0000 (0:00:00.500) 0:06:21.097 ******* 2026-02-09 02:55:13.690330 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:55:13.690343 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:55:13.690356 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:55:13.690368 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:55:13.690381 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:55:13.690394 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:55:13.690406 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:55:13.690419 | orchestrator | 2026-02-09 02:55:13.690431 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-09 02:55:13.690444 | orchestrator | Monday 09 February 2026 02:54:55 +0000 (0:00:00.540) 0:06:21.638 ******* 2026-02-09 02:55:13.690457 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:55:13.690470 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:55:13.690483 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:55:13.690494 | orchestrator | skipping: 
[testbed-node-5] 2026-02-09 02:55:13.690505 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:55:13.690515 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:55:13.690526 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:55:13.690537 | orchestrator | 2026-02-09 02:55:13.690548 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-09 02:55:13.690560 | orchestrator | Monday 09 February 2026 02:54:55 +0000 (0:00:00.544) 0:06:22.182 ******* 2026-02-09 02:55:13.690571 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:13.690582 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:13.690617 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:55:13.690629 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:55:13.690640 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:13.690651 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:13.690662 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:55:13.690673 | orchestrator | 2026-02-09 02:55:13.690685 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-09 02:55:13.690696 | orchestrator | Monday 09 February 2026 02:54:57 +0000 (0:00:01.666) 0:06:23.848 ******* 2026-02-09 02:55:13.690709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:55:13.690723 | orchestrator | 2026-02-09 02:55:13.690734 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-09 02:55:13.690746 | orchestrator | Monday 09 February 2026 02:54:58 +0000 (0:00:00.903) 0:06:24.752 ******* 2026-02-09 02:55:13.690773 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:13.690785 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:55:13.690796 | orchestrator | changed: 
[testbed-node-4] 2026-02-09 02:55:13.690807 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:55:13.690818 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:55:13.690829 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:55:13.690840 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:55:13.690851 | orchestrator | 2026-02-09 02:55:13.690862 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-09 02:55:13.690873 | orchestrator | Monday 09 February 2026 02:54:59 +0000 (0:00:00.851) 0:06:25.603 ******* 2026-02-09 02:55:13.690884 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:13.690895 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:55:13.690906 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:55:13.690917 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:55:13.690928 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:55:13.690939 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:55:13.690950 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:55:13.690961 | orchestrator | 2026-02-09 02:55:13.690972 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-09 02:55:13.690983 | orchestrator | Monday 09 February 2026 02:55:00 +0000 (0:00:00.846) 0:06:26.450 ******* 2026-02-09 02:55:13.690994 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:13.691005 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:55:13.691016 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:55:13.691027 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:55:13.691038 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:55:13.691049 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:55:13.691060 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:55:13.691071 | orchestrator | 2026-02-09 02:55:13.691082 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-02-09 02:55:13.691114 | orchestrator | Monday 09 February 2026 02:55:01 +0000 (0:00:01.519) 0:06:27.970 ******* 2026-02-09 02:55:13.691126 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:55:13.691137 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:13.691148 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:55:13.691160 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:55:13.691171 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:13.691181 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:13.691193 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:55:13.691203 | orchestrator | 2026-02-09 02:55:13.691215 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-09 02:55:13.691226 | orchestrator | Monday 09 February 2026 02:55:03 +0000 (0:00:01.382) 0:06:29.352 ******* 2026-02-09 02:55:13.691237 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:13.691248 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:55:13.691259 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:55:13.691270 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:55:13.691281 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:55:13.691292 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:55:13.691303 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:55:13.691314 | orchestrator | 2026-02-09 02:55:13.691325 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-09 02:55:13.691336 | orchestrator | Monday 09 February 2026 02:55:04 +0000 (0:00:01.305) 0:06:30.658 ******* 2026-02-09 02:55:13.691347 | orchestrator | changed: [testbed-manager] 2026-02-09 02:55:13.691358 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:55:13.691369 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:55:13.691381 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:55:13.691391 | orchestrator | changed: 
[testbed-node-0] 2026-02-09 02:55:13.691402 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:55:13.691413 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:55:13.691424 | orchestrator | 2026-02-09 02:55:13.691444 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-09 02:55:13.691455 | orchestrator | Monday 09 February 2026 02:55:05 +0000 (0:00:01.452) 0:06:32.111 ******* 2026-02-09 02:55:13.691467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:55:13.691478 | orchestrator | 2026-02-09 02:55:13.691490 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-09 02:55:13.691501 | orchestrator | Monday 09 February 2026 02:55:06 +0000 (0:00:01.078) 0:06:33.190 ******* 2026-02-09 02:55:13.691512 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:13.691523 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:13.691534 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:55:13.691545 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:55:13.691556 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:13.691567 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:13.691578 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:55:13.691589 | orchestrator | 2026-02-09 02:55:13.691619 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-09 02:55:13.691630 | orchestrator | Monday 09 February 2026 02:55:08 +0000 (0:00:01.343) 0:06:34.533 ******* 2026-02-09 02:55:13.691642 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:13.691653 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:13.691664 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:55:13.691675 | orchestrator | ok: [testbed-node-5] 
2026-02-09 02:55:13.691686 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:13.691712 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:13.691723 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:55:13.691734 | orchestrator | 2026-02-09 02:55:13.691746 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-09 02:55:13.691757 | orchestrator | Monday 09 February 2026 02:55:09 +0000 (0:00:01.098) 0:06:35.631 ******* 2026-02-09 02:55:13.691768 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:13.691779 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:55:13.691790 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:55:13.691801 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:13.691812 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:13.691823 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:55:13.691834 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:13.691845 | orchestrator | 2026-02-09 02:55:13.691856 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-09 02:55:13.691868 | orchestrator | Monday 09 February 2026 02:55:11 +0000 (0:00:01.784) 0:06:37.416 ******* 2026-02-09 02:55:13.691879 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:13.691890 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:13.691901 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:55:13.691911 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:55:13.691922 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:13.691933 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:13.691944 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:55:13.691955 | orchestrator | 2026-02-09 02:55:13.691967 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-09 02:55:13.691978 | orchestrator | Monday 09 February 2026 02:55:12 +0000 (0:00:01.332) 0:06:38.749 ******* 2026-02-09 02:55:13.691989 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:55:13.692001 | orchestrator | 2026-02-09 02:55:13.692012 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-09 02:55:13.692023 | orchestrator | Monday 09 February 2026 02:55:13 +0000 (0:00:00.927) 0:06:39.677 ******* 2026-02-09 02:55:13.692034 | orchestrator | 2026-02-09 02:55:13.692045 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-09 02:55:13.692063 | orchestrator | Monday 09 February 2026 02:55:13 +0000 (0:00:00.045) 0:06:39.722 ******* 2026-02-09 02:55:13.692074 | orchestrator | 2026-02-09 02:55:13.692086 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-09 02:55:13.692097 | orchestrator | Monday 09 February 2026 02:55:13 +0000 (0:00:00.049) 0:06:39.771 ******* 2026-02-09 02:55:13.692108 | orchestrator | 2026-02-09 02:55:13.692119 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-09 02:55:13.692137 | orchestrator | Monday 09 February 2026 02:55:13 +0000 (0:00:00.040) 0:06:39.811 ******* 2026-02-09 02:55:38.986278 | orchestrator | 2026-02-09 02:55:38.986386 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-09 02:55:38.986403 | orchestrator | Monday 09 February 2026 02:55:13 +0000 (0:00:00.039) 0:06:39.851 ******* 2026-02-09 02:55:38.986414 | orchestrator | 2026-02-09 02:55:38.986424 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-09 02:55:38.986434 | orchestrator | Monday 09 February 2026 02:55:13 +0000 (0:00:00.047) 0:06:39.898 ******* 2026-02-09 02:55:38.986444 | orchestrator | 
2026-02-09 02:55:38.986455 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-09 02:55:38.986465 | orchestrator | Monday 09 February 2026 02:55:13 +0000 (0:00:00.039) 0:06:39.938 ******* 2026-02-09 02:55:38.986476 | orchestrator | 2026-02-09 02:55:38.986486 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-09 02:55:38.986497 | orchestrator | Monday 09 February 2026 02:55:13 +0000 (0:00:00.039) 0:06:39.978 ******* 2026-02-09 02:55:38.986508 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:38.986522 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:55:38.986534 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:38.986544 | orchestrator | 2026-02-09 02:55:38.986556 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-09 02:55:38.986567 | orchestrator | Monday 09 February 2026 02:55:14 +0000 (0:00:01.111) 0:06:41.090 ******* 2026-02-09 02:55:38.986578 | orchestrator | changed: [testbed-manager] 2026-02-09 02:55:38.986651 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:55:38.986661 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:55:38.986668 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:55:38.986674 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:55:38.986681 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:55:38.986687 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:55:38.986693 | orchestrator | 2026-02-09 02:55:38.986700 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-09 02:55:38.986707 | orchestrator | Monday 09 February 2026 02:55:16 +0000 (0:00:01.450) 0:06:42.540 ******* 2026-02-09 02:55:38.986713 | orchestrator | changed: [testbed-manager] 2026-02-09 02:55:38.986719 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:55:38.986726 | orchestrator | changed: [testbed-node-4] 
2026-02-09 02:55:38.986732 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:55:38.986738 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:55:38.986744 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:55:38.986751 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:55:38.986757 | orchestrator | 2026-02-09 02:55:38.986763 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-09 02:55:38.986770 | orchestrator | Monday 09 February 2026 02:55:17 +0000 (0:00:01.160) 0:06:43.700 ******* 2026-02-09 02:55:38.986776 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:55:38.986782 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:55:38.986788 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:55:38.986794 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:55:38.986800 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:55:38.986806 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:55:38.986813 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:55:38.986819 | orchestrator | 2026-02-09 02:55:38.986825 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-09 02:55:38.986831 | orchestrator | Monday 09 February 2026 02:55:19 +0000 (0:00:02.355) 0:06:46.056 ******* 2026-02-09 02:55:38.986858 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:55:38.986876 | orchestrator | 2026-02-09 02:55:38.986885 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-09 02:55:38.986892 | orchestrator | Monday 09 February 2026 02:55:19 +0000 (0:00:00.099) 0:06:46.156 ******* 2026-02-09 02:55:38.986899 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:38.986907 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:55:38.986914 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:55:38.986921 | orchestrator | changed: [testbed-node-5] 2026-02-09 
02:55:38.986928 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:55:38.986935 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:55:38.986942 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:55:38.986949 | orchestrator | 2026-02-09 02:55:38.986957 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-09 02:55:38.986965 | orchestrator | Monday 09 February 2026 02:55:20 +0000 (0:00:00.971) 0:06:47.128 ******* 2026-02-09 02:55:38.986972 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:55:38.986979 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:55:38.986986 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:55:38.986993 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:55:38.987000 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:55:38.987007 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:55:38.987015 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:55:38.987022 | orchestrator | 2026-02-09 02:55:38.987029 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-09 02:55:38.987036 | orchestrator | Monday 09 February 2026 02:55:21 +0000 (0:00:00.570) 0:06:47.698 ******* 2026-02-09 02:55:38.987045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:55:38.987054 | orchestrator | 2026-02-09 02:55:38.987061 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-09 02:55:38.987069 | orchestrator | Monday 09 February 2026 02:55:22 +0000 (0:00:01.196) 0:06:48.895 ******* 2026-02-09 02:55:38.987076 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:38.987083 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:38.987090 | orchestrator 
| ok: [testbed-node-4] 2026-02-09 02:55:38.987097 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:55:38.987105 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:38.987111 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:38.987119 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:55:38.987126 | orchestrator | 2026-02-09 02:55:38.987134 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-09 02:55:38.987141 | orchestrator | Monday 09 February 2026 02:55:23 +0000 (0:00:00.789) 0:06:49.684 ******* 2026-02-09 02:55:38.987148 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-09 02:55:38.987172 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-09 02:55:38.987180 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-09 02:55:38.987188 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-09 02:55:38.987195 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-09 02:55:38.987202 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-09 02:55:38.987209 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-09 02:55:38.987216 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-09 02:55:38.987224 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-09 02:55:38.987231 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-09 02:55:38.987238 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-09 02:55:38.987245 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-09 02:55:38.987258 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-09 02:55:38.987265 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-09 02:55:38.987272 | orchestrator | 2026-02-09 02:55:38.987280 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-09 02:55:38.987287 | orchestrator | Monday 09 February 2026 02:55:25 +0000 (0:00:02.316) 0:06:52.001 ******* 2026-02-09 02:55:38.987294 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:55:38.987301 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:55:38.987308 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:55:38.987315 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:55:38.987323 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:55:38.987330 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:55:38.987337 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:55:38.987344 | orchestrator | 2026-02-09 02:55:38.987351 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-09 02:55:38.987358 | orchestrator | Monday 09 February 2026 02:55:26 +0000 (0:00:00.750) 0:06:52.752 ******* 2026-02-09 02:55:38.987367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 02:55:38.987376 | orchestrator | 2026-02-09 02:55:38.987383 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-09 02:55:38.987390 | orchestrator | Monday 09 February 2026 02:55:27 +0000 (0:00:00.844) 0:06:53.596 ******* 2026-02-09 02:55:38.987397 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:38.987404 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:38.987412 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:55:38.987419 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:55:38.987426 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:38.987433 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:38.987440 | orchestrator | ok: 
[testbed-node-2] 2026-02-09 02:55:38.987447 | orchestrator | 2026-02-09 02:55:38.987454 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-09 02:55:38.987462 | orchestrator | Monday 09 February 2026 02:55:28 +0000 (0:00:00.870) 0:06:54.467 ******* 2026-02-09 02:55:38.987472 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:38.987480 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:38.987487 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:55:38.987494 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:55:38.987501 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:55:38.987508 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:55:38.987515 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:55:38.987522 | orchestrator | 2026-02-09 02:55:38.987530 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-09 02:55:38.987537 | orchestrator | Monday 09 February 2026 02:55:29 +0000 (0:00:01.031) 0:06:55.498 ******* 2026-02-09 02:55:38.987544 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:55:38.987551 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:55:38.987558 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:55:38.987565 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:55:38.987573 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:55:38.987579 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:55:38.987587 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:55:38.987616 | orchestrator | 2026-02-09 02:55:38.987628 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-09 02:55:38.987635 | orchestrator | Monday 09 February 2026 02:55:29 +0000 (0:00:00.503) 0:06:56.001 ******* 2026-02-09 02:55:38.987642 | orchestrator | ok: [testbed-manager] 2026-02-09 02:55:38.987650 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:55:38.987657 | 
orchestrator | ok: [testbed-node-4]
2026-02-09 02:55:38.987664 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:55:38.987671 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:55:38.987685 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:55:38.987692 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:55:38.987699 | orchestrator |
2026-02-09 02:55:38.987707 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-09 02:55:38.987714 | orchestrator | Monday 09 February 2026 02:55:31 +0000 (0:00:01.526) 0:06:57.528 *******
2026-02-09 02:55:38.987721 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:55:38.987728 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:55:38.987735 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:55:38.987742 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:55:38.987749 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:55:38.987756 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:55:38.987763 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:55:38.987771 | orchestrator |
2026-02-09 02:55:38.987778 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-09 02:55:38.987785 | orchestrator | Monday 09 February 2026 02:55:31 +0000 (0:00:00.507) 0:06:58.035 *******
2026-02-09 02:55:38.987792 | orchestrator | ok: [testbed-manager]
2026-02-09 02:55:38.987799 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:55:38.987807 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:55:38.987814 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:55:38.987821 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:55:38.987828 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:55:38.987841 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:10.156239 | orchestrator |
2026-02-09 02:56:10.156392 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-09 02:56:10.156410 | orchestrator | Monday 09 February 2026 02:55:38 +0000 (0:00:07.241) 0:07:05.277 *******
2026-02-09 02:56:10.156422 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.156435 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:10.156448 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:10.156459 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:10.156470 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:10.156481 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:10.156492 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:10.156503 | orchestrator |
2026-02-09 02:56:10.156515 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-09 02:56:10.156526 | orchestrator | Monday 09 February 2026 02:55:40 +0000 (0:00:01.551) 0:07:06.829 *******
2026-02-09 02:56:10.156537 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.156548 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:10.156559 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:10.156570 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:10.156580 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:10.156618 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:10.156631 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:10.156642 | orchestrator |
2026-02-09 02:56:10.156653 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-09 02:56:10.156664 | orchestrator | Monday 09 February 2026 02:55:42 +0000 (0:00:01.668) 0:07:08.497 *******
2026-02-09 02:56:10.156675 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.156686 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:10.156696 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:10.156707 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:10.156718 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:10.156731 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:10.156744 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:10.156757 | orchestrator |
2026-02-09 02:56:10.156769 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-09 02:56:10.156782 | orchestrator | Monday 09 February 2026 02:55:43 +0000 (0:00:01.679) 0:07:10.176 *******
2026-02-09 02:56:10.156795 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.156808 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:10.156821 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:10.156865 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:10.156895 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:10.156927 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:10.156946 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:10.156965 | orchestrator |
2026-02-09 02:56:10.156983 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-09 02:56:10.157002 | orchestrator | Monday 09 February 2026 02:55:44 +0000 (0:00:00.816) 0:07:10.993 *******
2026-02-09 02:56:10.157021 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:56:10.157040 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:56:10.157060 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:56:10.157079 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:56:10.157098 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:56:10.157117 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:56:10.157136 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:56:10.157155 | orchestrator |
2026-02-09 02:56:10.157167 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-09 02:56:10.157179 | orchestrator | Monday 09 February 2026 02:55:45 +0000 (0:00:01.034) 0:07:12.028 *******
2026-02-09 02:56:10.157189 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:56:10.157200 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:56:10.157211 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:56:10.157222 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:56:10.157233 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:56:10.157244 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:56:10.157254 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:56:10.157265 | orchestrator |
2026-02-09 02:56:10.157276 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-09 02:56:10.157287 | orchestrator | Monday 09 February 2026 02:55:46 +0000 (0:00:00.511) 0:07:12.540 *******
2026-02-09 02:56:10.157298 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.157332 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:10.157344 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:10.157355 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:10.157366 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:10.157376 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:10.157387 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:10.157398 | orchestrator |
2026-02-09 02:56:10.157409 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-09 02:56:10.157420 | orchestrator | Monday 09 February 2026 02:55:46 +0000 (0:00:00.577) 0:07:13.117 *******
2026-02-09 02:56:10.157431 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.157441 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:10.157452 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:10.157463 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:10.157474 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:10.157485 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:10.157495 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:10.157506 | orchestrator |
2026-02-09 02:56:10.157517 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-09 02:56:10.157528 | orchestrator | Monday 09 February 2026 02:55:47 +0000 (0:00:00.525) 0:07:13.643 *******
2026-02-09 02:56:10.157539 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.157550 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:10.157561 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:10.157571 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:10.157582 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:10.157626 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:10.157638 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:10.157649 | orchestrator |
2026-02-09 02:56:10.157660 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-09 02:56:10.157671 | orchestrator | Monday 09 February 2026 02:55:48 +0000 (0:00:00.715) 0:07:14.359 *******
2026-02-09 02:56:10.157682 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.157693 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:10.157715 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:10.157726 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:10.157736 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:10.157747 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:10.157758 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:10.157769 | orchestrator |
2026-02-09 02:56:10.157803 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-09 02:56:10.157815 | orchestrator | Monday 09 February 2026 02:55:53 +0000 (0:00:05.473) 0:07:19.832 *******
2026-02-09 02:56:10.157826 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:56:10.157837 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:56:10.157848 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:56:10.157859 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:56:10.157869 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:56:10.157880 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:56:10.157891 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:56:10.157901 | orchestrator |
2026-02-09 02:56:10.157912 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-09 02:56:10.157923 | orchestrator | Monday 09 February 2026 02:55:54 +0000 (0:00:00.532) 0:07:20.365 *******
2026-02-09 02:56:10.157936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:56:10.157950 | orchestrator |
2026-02-09 02:56:10.157961 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-09 02:56:10.157972 | orchestrator | Monday 09 February 2026 02:55:55 +0000 (0:00:01.009) 0:07:21.374 *******
2026-02-09 02:56:10.157983 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.157994 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:10.158004 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:10.158092 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:10.158119 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:10.158139 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:10.158158 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:10.158176 | orchestrator |
2026-02-09 02:56:10.158197 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-09 02:56:10.158215 | orchestrator | Monday 09 February 2026 02:55:56 +0000 (0:00:01.831) 0:07:23.206 *******
2026-02-09 02:56:10.158232 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.158244 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:10.158255 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:10.158266 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:10.158276 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:10.158288 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:10.158314 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:10.158325 | orchestrator |
2026-02-09 02:56:10.158337 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-09 02:56:10.158358 | orchestrator | Monday 09 February 2026 02:55:57 +0000 (0:00:01.100) 0:07:24.306 *******
2026-02-09 02:56:10.158370 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:10.158380 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:10.158391 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:10.158402 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:10.158413 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:10.158423 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:10.158434 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:10.158445 | orchestrator |
2026-02-09 02:56:10.158456 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-09 02:56:10.158467 | orchestrator | Monday 09 February 2026 02:55:58 +0000 (0:00:00.824) 0:07:25.130 *******
2026-02-09 02:56:10.158486 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-09 02:56:10.158500 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-09 02:56:10.158521 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-09 02:56:10.158533 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-09 02:56:10.158544 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-09 02:56:10.158555 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-09 02:56:10.158565 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-09 02:56:10.158576 | orchestrator |
2026-02-09 02:56:10.158587 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-09 02:56:10.158654 | orchestrator | Monday 09 February 2026 02:56:00 +0000 (0:00:01.847) 0:07:26.977 *******
2026-02-09 02:56:10.158666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:56:10.158677 | orchestrator |
2026-02-09 02:56:10.158689 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-09 02:56:10.158708 | orchestrator | Monday 09 February 2026 02:56:01 +0000 (0:00:00.834) 0:07:27.812 *******
2026-02-09 02:56:10.158727 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:10.158745 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:10.158764 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:10.158782 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:10.158801 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:10.158820 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:10.158839 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:10.158858 | orchestrator |
2026-02-09 02:56:10.158892 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-09 02:56:39.515400 | orchestrator | Monday 09 February 2026 02:56:10 +0000 (0:00:08.636) 0:07:36.449 *******
2026-02-09 02:56:39.515567 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:39.515694 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:39.515713 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:39.515724 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:39.515735 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:39.515746 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:39.515757 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:39.515768 | orchestrator |
2026-02-09 02:56:39.515781 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-09 02:56:39.515812 | orchestrator | Monday 09 February 2026 02:56:12 +0000 (0:00:01.953) 0:07:38.403 *******
2026-02-09 02:56:39.515829 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:39.515840 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:39.515851 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:39.515862 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:39.515873 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:39.515883 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:39.515894 | orchestrator |
2026-02-09 02:56:39.515905 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-09 02:56:39.515917 | orchestrator | Monday 09 February 2026 02:56:13 +0000 (0:00:01.268) 0:07:39.671 *******
2026-02-09 02:56:39.515928 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:39.515939 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:39.515950 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:39.515961 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:39.515972 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:39.516021 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:39.516034 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:39.516045 | orchestrator |
2026-02-09 02:56:39.516056 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-09 02:56:39.516067 | orchestrator |
2026-02-09 02:56:39.516078 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-09 02:56:39.516089 | orchestrator | Monday 09 February 2026 02:56:14 +0000 (0:00:01.228) 0:07:40.900 *******
2026-02-09 02:56:39.516100 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:56:39.516110 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:56:39.516121 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:56:39.516131 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:56:39.516142 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:56:39.516153 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:56:39.516163 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:56:39.516174 | orchestrator |
2026-02-09 02:56:39.516185 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-09 02:56:39.516195 | orchestrator |
2026-02-09 02:56:39.516206 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-09 02:56:39.516217 | orchestrator | Monday 09 February 2026 02:56:15 +0000 (0:00:00.720) 0:07:41.620 *******
2026-02-09 02:56:39.516228 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:39.516239 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:39.516250 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:39.516260 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:39.516271 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:39.516281 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:39.516292 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:39.516303 | orchestrator |
2026-02-09 02:56:39.516314 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-09 02:56:39.516341 | orchestrator | Monday 09 February 2026 02:56:16 +0000 (0:00:01.305) 0:07:42.926 *******
2026-02-09 02:56:39.516351 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:39.516362 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:39.516373 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:39.516384 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:39.516394 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:39.516405 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:39.516415 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:39.516426 | orchestrator |
2026-02-09 02:56:39.516437 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-09 02:56:39.516453 | orchestrator | Monday 09 February 2026 02:56:18 +0000 (0:00:01.463) 0:07:44.389 *******
2026-02-09 02:56:39.516471 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:56:39.516491 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:56:39.516508 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:56:39.516525 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:56:39.516543 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:56:39.516561 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:56:39.516579 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:56:39.516626 | orchestrator |
2026-02-09 02:56:39.516645 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-09 02:56:39.516664 | orchestrator | Monday 09 February 2026 02:56:18 +0000 (0:00:00.517) 0:07:44.907 *******
2026-02-09 02:56:39.516685 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:56:39.516705 | orchestrator |
2026-02-09 02:56:39.516723 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-09 02:56:39.516740 | orchestrator | Monday 09 February 2026 02:56:19 +0000 (0:00:01.018) 0:07:45.925 *******
2026-02-09 02:56:39.516770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:56:39.516796 | orchestrator |
2026-02-09 02:56:39.516807 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-09 02:56:39.516818 | orchestrator | Monday 09 February 2026 02:56:20 +0000 (0:00:00.795) 0:07:46.721 *******
2026-02-09 02:56:39.516829 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:39.516840 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:39.516851 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:39.516861 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:39.516873 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:39.516883 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:39.516894 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:39.516909 | orchestrator |
2026-02-09 02:56:39.516954 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-09 02:56:39.516973 | orchestrator | Monday 09 February 2026 02:56:28 +0000 (0:00:08.011) 0:07:54.732 *******
2026-02-09 02:56:39.516991 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:39.517008 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:39.517025 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:39.517040 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:39.517057 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:39.517073 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:39.517089 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:39.517106 | orchestrator |
2026-02-09 02:56:39.517124 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-09 02:56:39.517140 | orchestrator | Monday 09 February 2026 02:56:29 +0000 (0:00:00.779) 0:07:55.512 *******
2026-02-09 02:56:39.517156 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:39.517173 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:39.517191 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:39.517210 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:39.517228 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:39.517245 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:39.517265 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:39.517284 | orchestrator |
2026-02-09 02:56:39.517303 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-09 02:56:39.517319 | orchestrator | Monday 09 February 2026 02:56:30 +0000 (0:00:01.223) 0:07:56.735 *******
2026-02-09 02:56:39.517330 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:39.517341 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:39.517351 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:39.517362 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:39.517372 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:39.517383 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:39.517393 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:39.517404 | orchestrator |
2026-02-09 02:56:39.517415 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-09 02:56:39.517426 | orchestrator | Monday 09 February 2026 02:56:32 +0000 (0:00:01.868) 0:07:58.603 *******
2026-02-09 02:56:39.517436 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:39.517447 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:39.517457 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:39.517468 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:39.517479 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:39.517489 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:39.517500 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:39.517511 | orchestrator |
2026-02-09 02:56:39.517521 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-09 02:56:39.517532 | orchestrator | Monday 09 February 2026 02:56:33 +0000 (0:00:01.231) 0:07:59.835 *******
2026-02-09 02:56:39.517542 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:39.517553 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:39.517575 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:39.517585 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:39.517660 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:39.517679 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:39.517696 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:39.517714 | orchestrator |
2026-02-09 02:56:39.517734 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-09 02:56:39.517753 | orchestrator |
2026-02-09 02:56:39.517783 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-09 02:56:39.517803 | orchestrator | Monday 09 February 2026 02:56:34 +0000 (0:00:01.099) 0:08:00.935 *******
2026-02-09 02:56:39.517822 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:56:39.517835 | orchestrator |
2026-02-09 02:56:39.517847 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-09 02:56:39.517857 | orchestrator | Monday 09 February 2026 02:56:35 +0000 (0:00:00.856) 0:08:01.791 *******
2026-02-09 02:56:39.517868 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:39.517879 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:39.517890 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:39.517900 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:39.517911 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:39.517921 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:39.517932 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:39.517943 | orchestrator |
2026-02-09 02:56:39.517954 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-09 02:56:39.517964 | orchestrator | Monday 09 February 2026 02:56:36 +0000 (0:00:01.009) 0:08:02.801 *******
2026-02-09 02:56:39.517976 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:39.517986 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:39.517997 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:39.518008 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:39.518124 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:39.518141 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:39.518152 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:39.518163 | orchestrator |
2026-02-09 02:56:39.518174 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-09 02:56:39.518185 | orchestrator | Monday 09 February 2026 02:56:37 +0000 (0:00:01.125) 0:08:03.927 *******
2026-02-09 02:56:39.518196 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 02:56:39.518207 | orchestrator |
2026-02-09 02:56:39.518217 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-09 02:56:39.518228 | orchestrator | Monday 09 February 2026 02:56:38 +0000 (0:00:01.043) 0:08:04.971 *******
2026-02-09 02:56:39.518239 | orchestrator | ok: [testbed-manager]
2026-02-09 02:56:39.518250 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:56:39.518260 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:56:39.518271 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:56:39.518282 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:56:39.518292 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:56:39.518303 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:56:39.518314 | orchestrator |
2026-02-09 02:56:39.518339 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-09 02:56:41.063061 | orchestrator | Monday 09 February 2026 02:56:39 +0000 (0:00:00.838) 0:08:05.809 *******
2026-02-09 02:56:41.063163 | orchestrator | changed: [testbed-manager]
2026-02-09 02:56:41.063174 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:56:41.063179 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:56:41.063185 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:56:41.063190 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:56:41.063196 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:56:41.063201 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:56:41.063228 | orchestrator |
2026-02-09 02:56:41.063234 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:56:41.063241 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-09 02:56:41.063248 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-09 02:56:41.063253 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-09 02:56:41.063259 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-09 02:56:41.063264 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-09 02:56:41.063272 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-09 02:56:41.063281 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-09 02:56:41.063289 | orchestrator |
2026-02-09 02:56:41.063297 | orchestrator |
2026-02-09 02:56:41.063305 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:56:41.063314 | orchestrator | Monday 09 February 2026 02:56:40 +0000 (0:00:01.047) 0:08:06.856 *******
2026-02-09 02:56:41.063322 | orchestrator | ===============================================================================
2026-02-09 02:56:41.063330 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.30s
2026-02-09 02:56:41.063339 | orchestrator | osism.commons.packages : Download required packages -------------------- 47.41s
2026-02-09 02:56:41.063346 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.97s
2026-02-09 02:56:41.063354 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.45s
2026-02-09 02:56:41.063363 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.20s
2026-02-09 02:56:41.063388 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.31s
2026-02-09 02:56:41.063399 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.66s
2026-02-09 02:56:41.063407 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.14s
2026-02-09 02:56:41.063416 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.64s
2026-02-09 02:56:41.063425 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.47s
2026-02-09 02:56:41.063433 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.01s
2026-02-09 02:56:41.063442 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.80s
2026-02-09 02:56:41.063450 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.38s
2026-02-09 02:56:41.063458 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.32s
2026-02-09 02:56:41.063467 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.24s
2026-02-09 02:56:41.063476 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.00s
2026-02-09 02:56:41.063485 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.33s
2026-02-09 02:56:41.063494 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.81s
2026-02-09 02:56:41.063502 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.65s
2026-02-09 02:56:41.063511 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.57s
2026-02-09 02:56:41.398378 | orchestrator | + osism apply fail2ban
2026-02-09 02:56:54.327283 | orchestrator | 2026-02-09 02:56:54 | INFO  | Task d29deb53-5ade-4397-b530-e50acab2cbd9 (fail2ban) was prepared for execution.
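The `osism apply fail2ban` step hands the fail2ban role to the OSISM task runner; the role then installs the package and copies configuration files onto every host. The role's actual templates are not visible in this log; purely as an illustration, a jail configuration of the kind such a role might deploy could look like the following (hypothetical values, not taken from the osism.services.fail2ban templates):

```ini
# /etc/fail2ban/jail.local -- illustrative sketch only; the real file
# deployed by the "Copy configuration files" task is not shown in this log
[DEFAULT]
bantime  = 600
findtime = 600
maxretry = 5

[sshd]
enabled = true
```

Any drop-in under /etc/fail2ban/jail.d/ or a jail.local like this overrides the distribution defaults in jail.conf, which is why such roles template a separate file rather than editing jail.conf in place.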
2026-02-09 02:56:54.327384 | orchestrator | 2026-02-09 02:56:54 | INFO  | It takes a moment until task d29deb53-5ade-4397-b530-e50acab2cbd9 (fail2ban) has been started and output is visible here. 2026-02-09 02:57:16.440188 | orchestrator | 2026-02-09 02:57:16.440298 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-09 02:57:16.440316 | orchestrator | 2026-02-09 02:57:16.440330 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-09 02:57:16.440343 | orchestrator | Monday 09 February 2026 02:56:59 +0000 (0:00:00.300) 0:00:00.301 ******* 2026-02-09 02:57:16.440357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 02:57:16.440373 | orchestrator | 2026-02-09 02:57:16.440386 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-09 02:57:16.440397 | orchestrator | Monday 09 February 2026 02:57:00 +0000 (0:00:01.220) 0:00:01.521 ******* 2026-02-09 02:57:16.440410 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:57:16.440425 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:57:16.440439 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:57:16.440452 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:57:16.440466 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:57:16.440478 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:57:16.440492 | orchestrator | changed: [testbed-manager] 2026-02-09 02:57:16.440505 | orchestrator | 2026-02-09 02:57:16.440517 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-09 02:57:16.440530 | orchestrator | Monday 09 February 2026 02:57:11 +0000 (0:00:11.152) 0:00:12.674 ******* 
2026-02-09 02:57:16.440544 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:57:16.440557 | orchestrator | changed: [testbed-manager] 2026-02-09 02:57:16.440686 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:57:16.440707 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:57:16.440727 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:57:16.440741 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:57:16.440754 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:57:16.440766 | orchestrator | 2026-02-09 02:57:16.440779 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] *********************** 2026-02-09 02:57:16.440793 | orchestrator | Monday 09 February 2026 02:57:12 +0000 (0:00:01.424) 0:00:14.098 ******* 2026-02-09 02:57:16.440807 | orchestrator | ok: [testbed-manager] 2026-02-09 02:57:16.440821 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:57:16.440834 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:57:16.440848 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:57:16.440862 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:57:16.440875 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:57:16.440888 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:57:16.440911 | orchestrator | 2026-02-09 02:57:16.440918 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] ***************** 2026-02-09 02:57:16.440926 | orchestrator | Monday 09 February 2026 02:57:14 +0000 (0:00:01.447) 0:00:15.546 ******* 2026-02-09 02:57:16.440934 | orchestrator | changed: [testbed-manager] 2026-02-09 02:57:16.440941 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:57:16.440949 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:57:16.440956 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:57:16.440963 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:57:16.440970 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:57:16.440977 | orchestrator | changed: 
[testbed-node-5]
2026-02-09 02:57:16.440985 | orchestrator |
2026-02-09 02:57:16.440992 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:57:16.441000 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:57:16.441036 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:57:16.441044 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:57:16.441052 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:57:16.441059 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:57:16.441067 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:57:16.441074 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:57:16.441081 | orchestrator |
2026-02-09 02:57:16.441088 | orchestrator |
2026-02-09 02:57:16.441095 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:57:16.441102 | orchestrator | Monday 09 February 2026 02:57:16 +0000 (0:00:01.608) 0:00:17.154 *******
2026-02-09 02:57:16.441115 | orchestrator | ===============================================================================
2026-02-09 02:57:16.441127 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.15s
2026-02-09 02:57:16.441139 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.61s
2026-02-09 02:57:16.441151 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.45s
2026-02-09 02:57:16.441163 | orchestrator | osism.services.fail2ban :
Copy configuration files ---------------------- 1.42s 2026-02-09 02:57:16.441175 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.22s 2026-02-09 02:57:16.768218 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-09 02:57:16.768301 | orchestrator | + osism apply network 2026-02-09 02:57:28.842462 | orchestrator | 2026-02-09 02:57:28 | INFO  | Task b1439bca-23b2-4686-8e8d-8aaf8e430397 (network) was prepared for execution. 2026-02-09 02:57:28.842574 | orchestrator | 2026-02-09 02:57:28 | INFO  | It takes a moment until task b1439bca-23b2-4686-8e8d-8aaf8e430397 (network) has been started and output is visible here. 2026-02-09 02:57:56.751708 | orchestrator | 2026-02-09 02:57:56.751852 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-02-09 02:57:56.751883 | orchestrator | 2026-02-09 02:57:56.751905 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-02-09 02:57:56.751925 | orchestrator | Monday 09 February 2026 02:57:33 +0000 (0:00:00.266) 0:00:00.266 ******* 2026-02-09 02:57:56.751945 | orchestrator | ok: [testbed-manager] 2026-02-09 02:57:56.751968 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:57:56.751987 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:57:56.752007 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:57:56.752028 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:57:56.752048 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:57:56.752066 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:57:56.752077 | orchestrator | 2026-02-09 02:57:56.752088 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-02-09 02:57:56.752099 | orchestrator | Monday 09 February 2026 02:57:33 +0000 (0:00:00.719) 0:00:00.986 ******* 2026-02-09 02:57:56.752137 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 02:57:56.752151 | orchestrator | 2026-02-09 02:57:56.752162 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-02-09 02:57:56.752197 | orchestrator | Monday 09 February 2026 02:57:35 +0000 (0:00:01.215) 0:00:02.202 ******* 2026-02-09 02:57:56.752210 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:57:56.752223 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:57:56.752236 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:57:56.752249 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:57:56.752263 | orchestrator | ok: [testbed-manager] 2026-02-09 02:57:56.752275 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:57:56.752288 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:57:56.752301 | orchestrator | 2026-02-09 02:57:56.752314 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-02-09 02:57:56.752326 | orchestrator | Monday 09 February 2026 02:57:36 +0000 (0:00:01.688) 0:00:03.890 ******* 2026-02-09 02:57:56.752340 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:57:56.752352 | orchestrator | ok: [testbed-manager] 2026-02-09 02:57:56.752366 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:57:56.752378 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:57:56.752391 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:57:56.752403 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:57:56.752416 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:57:56.752428 | orchestrator | 2026-02-09 02:57:56.752442 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-02-09 02:57:56.752469 | orchestrator | Monday 09 February 2026 02:57:38 +0000 (0:00:01.719) 0:00:05.610 ******* 
2026-02-09 02:57:56.752493 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-02-09 02:57:56.752507 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-02-09 02:57:56.752520 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-02-09 02:57:56.752532 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-02-09 02:57:56.752546 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-02-09 02:57:56.752559 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-02-09 02:57:56.752571 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-02-09 02:57:56.752582 | orchestrator | 2026-02-09 02:57:56.752654 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-02-09 02:57:56.752673 | orchestrator | Monday 09 February 2026 02:57:39 +0000 (0:00:00.964) 0:00:06.574 ******* 2026-02-09 02:57:56.752685 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 02:57:56.752697 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 02:57:56.752707 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-09 02:57:56.752718 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-09 02:57:56.752729 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-09 02:57:56.752851 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-09 02:57:56.752868 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-09 02:57:56.752879 | orchestrator | 2026-02-09 02:57:56.752890 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-02-09 02:57:56.752901 | orchestrator | Monday 09 February 2026 02:57:42 +0000 (0:00:03.352) 0:00:09.927 ******* 2026-02-09 02:57:56.752923 | orchestrator | changed: [testbed-manager] 2026-02-09 02:57:56.752934 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:57:56.752958 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:57:56.752969 | orchestrator | changed: 
[testbed-node-2] 2026-02-09 02:57:56.752980 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:57:56.752991 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:57:56.753002 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:57:56.753012 | orchestrator | 2026-02-09 02:57:56.753023 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-02-09 02:57:56.753034 | orchestrator | Monday 09 February 2026 02:57:44 +0000 (0:00:01.562) 0:00:11.489 ******* 2026-02-09 02:57:56.753045 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-09 02:57:56.753056 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 02:57:56.753067 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 02:57:56.753077 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-09 02:57:56.753105 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-09 02:57:56.753124 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-09 02:57:56.753142 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-09 02:57:56.753161 | orchestrator | 2026-02-09 02:57:56.753179 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-09 02:57:56.753197 | orchestrator | Monday 09 February 2026 02:57:46 +0000 (0:00:01.700) 0:00:13.190 ******* 2026-02-09 02:57:56.753215 | orchestrator | ok: [testbed-manager] 2026-02-09 02:57:56.753235 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:57:56.753254 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:57:56.753272 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:57:56.753293 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:57:56.753311 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:57:56.753328 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:57:56.753347 | orchestrator | 2026-02-09 02:57:56.753366 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-09 02:57:56.753412 | 
orchestrator | Monday 09 February 2026 02:57:47 +0000 (0:00:01.125) 0:00:14.315 ******* 2026-02-09 02:57:56.753433 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:57:56.753445 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:57:56.753456 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:57:56.753467 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:57:56.753477 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:57:56.753488 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:57:56.753499 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:57:56.753510 | orchestrator | 2026-02-09 02:57:56.753521 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-02-09 02:57:56.753532 | orchestrator | Monday 09 February 2026 02:57:47 +0000 (0:00:00.668) 0:00:14.984 ******* 2026-02-09 02:57:56.753543 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:57:56.753553 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:57:56.753564 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:57:56.753575 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:57:56.753586 | orchestrator | ok: [testbed-manager] 2026-02-09 02:57:56.753628 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:57:56.753640 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:57:56.753651 | orchestrator | 2026-02-09 02:57:56.753662 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-09 02:57:56.753673 | orchestrator | Monday 09 February 2026 02:57:49 +0000 (0:00:01.859) 0:00:16.843 ******* 2026-02-09 02:57:56.753683 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:57:56.753694 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:57:56.753705 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:57:56.753716 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:57:56.753727 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:57:56.753737 | 
orchestrator | skipping: [testbed-node-5] 2026-02-09 02:57:56.753749 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-09 02:57:56.753762 | orchestrator | 2026-02-09 02:57:56.753773 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-09 02:57:56.753784 | orchestrator | Monday 09 February 2026 02:57:50 +0000 (0:00:00.920) 0:00:17.764 ******* 2026-02-09 02:57:56.753795 | orchestrator | ok: [testbed-manager] 2026-02-09 02:57:56.753806 | orchestrator | changed: [testbed-node-1] 2026-02-09 02:57:56.753816 | orchestrator | changed: [testbed-node-0] 2026-02-09 02:57:56.753827 | orchestrator | changed: [testbed-node-2] 2026-02-09 02:57:56.753838 | orchestrator | changed: [testbed-node-3] 2026-02-09 02:57:56.753849 | orchestrator | changed: [testbed-node-4] 2026-02-09 02:57:56.753859 | orchestrator | changed: [testbed-node-5] 2026-02-09 02:57:56.753870 | orchestrator | 2026-02-09 02:57:56.753881 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-09 02:57:56.753892 | orchestrator | Monday 09 February 2026 02:57:52 +0000 (0:00:01.675) 0:00:19.439 ******* 2026-02-09 02:57:56.753904 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 02:57:56.753927 | orchestrator | 2026-02-09 02:57:56.753938 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-09 02:57:56.753949 | orchestrator | Monday 09 February 2026 02:57:53 +0000 (0:00:01.378) 0:00:20.818 ******* 2026-02-09 02:57:56.753960 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:57:56.753971 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:57:56.753982 | orchestrator | 
ok: [testbed-manager] 2026-02-09 02:57:56.753993 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:57:56.754011 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:57:56.754150 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:57:56.754164 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:57:56.754175 | orchestrator | 2026-02-09 02:57:56.754186 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-09 02:57:56.754197 | orchestrator | Monday 09 February 2026 02:57:54 +0000 (0:00:00.891) 0:00:21.709 ******* 2026-02-09 02:57:56.754208 | orchestrator | ok: [testbed-manager] 2026-02-09 02:57:56.754219 | orchestrator | ok: [testbed-node-0] 2026-02-09 02:57:56.754230 | orchestrator | ok: [testbed-node-1] 2026-02-09 02:57:56.754240 | orchestrator | ok: [testbed-node-2] 2026-02-09 02:57:56.754251 | orchestrator | ok: [testbed-node-3] 2026-02-09 02:57:56.754262 | orchestrator | ok: [testbed-node-4] 2026-02-09 02:57:56.754273 | orchestrator | ok: [testbed-node-5] 2026-02-09 02:57:56.754283 | orchestrator | 2026-02-09 02:57:56.754294 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-09 02:57:56.754305 | orchestrator | Monday 09 February 2026 02:57:55 +0000 (0:00:00.892) 0:00:22.601 ******* 2026-02-09 02:57:56.754316 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-09 02:57:56.754328 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-09 02:57:56.754339 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-09 02:57:56.754349 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-09 02:57:56.754361 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-09 02:57:56.754371 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-09 02:57:56.754382 
| orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-09 02:57:56.754393 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-09 02:57:56.754404 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-09 02:57:56.754414 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-09 02:57:56.754425 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-09 02:57:56.754436 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-09 02:57:56.754446 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-09 02:57:56.754457 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-09 02:57:56.754468 | orchestrator | 2026-02-09 02:57:56.754490 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-09 02:58:13.594324 | orchestrator | Monday 09 February 2026 02:57:56 +0000 (0:00:01.212) 0:00:23.814 ******* 2026-02-09 02:58:13.594440 | orchestrator | skipping: [testbed-manager] 2026-02-09 02:58:13.594456 | orchestrator | skipping: [testbed-node-0] 2026-02-09 02:58:13.594468 | orchestrator | skipping: [testbed-node-1] 2026-02-09 02:58:13.594479 | orchestrator | skipping: [testbed-node-2] 2026-02-09 02:58:13.594490 | orchestrator | skipping: [testbed-node-3] 2026-02-09 02:58:13.594501 | orchestrator | skipping: [testbed-node-4] 2026-02-09 02:58:13.594512 | orchestrator | skipping: [testbed-node-5] 2026-02-09 02:58:13.594523 | orchestrator | 2026-02-09 02:58:13.594561 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-09 02:58:13.594573 | orchestrator | Monday 09 February 2026 02:57:57 +0000 (0:00:00.682) 0:00:24.497 ******* 2026-02-09 02:58:13.594585 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-5, testbed-node-4 2026-02-09 02:58:13.594629 | orchestrator | 2026-02-09 02:58:13.594644 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-09 02:58:13.594655 | orchestrator | Monday 09 February 2026 02:58:01 +0000 (0:00:04.549) 0:00:29.046 ******* 2026-02-09 02:58:13.594668 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594694 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.594705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 
42}}) 2026-02-09 02:58:13.594744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.594756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.594808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.594837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.594859 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.594872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.594885 | orchestrator | 2026-02-09 02:58:13.594900 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-09 02:58:13.594913 | orchestrator | Monday 09 February 2026 02:58:07 +0000 (0:00:05.903) 0:00:34.950 ******* 2026-02-09 02:58:13.594926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594952 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594965 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.594978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.594996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.595010 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.595023 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.595035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.595048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 
'mtu': 1350, 'vni': 42}}) 2026-02-09 02:58:13.595062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.595082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:13.595103 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:20.250322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-09 02:58:20.250416 | orchestrator | 2026-02-09 02:58:20.250429 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-02-09 02:58:20.250438 | orchestrator | Monday 09 February 2026 02:58:13 +0000 (0:00:05.708) 0:00:40.658 ******* 2026-02-09 02:58:20.250446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 02:58:20.250453 | orchestrator | 2026-02-09 02:58:20.250459 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 
2026-02-09 02:58:20.250466 | orchestrator | Monday 09 February 2026 02:58:14 +0000 (0:00:01.301) 0:00:41.959 *******
2026-02-09 02:58:20.250472 | orchestrator | ok: [testbed-manager]
2026-02-09 02:58:20.250479 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:58:20.250486 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:58:20.250491 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:58:20.250498 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:58:20.250504 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:58:20.250510 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:58:20.250516 | orchestrator |
2026-02-09 02:58:20.250522 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-09 02:58:20.250528 | orchestrator | Monday 09 February 2026 02:58:16 +0000 (0:00:01.231) 0:00:43.191 *******
2026-02-09 02:58:20.250535 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-09 02:58:20.250542 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-09 02:58:20.250549 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-09 02:58:20.250555 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-09 02:58:20.250561 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-09 02:58:20.250567 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-09 02:58:20.250574 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-09 02:58:20.250580 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-09 02:58:20.250587 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:58:20.250595 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-09 02:58:20.250666 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-09 02:58:20.250692 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-09 02:58:20.250700 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-09 02:58:20.250707 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:58:20.250733 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-09 02:58:20.250740 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-09 02:58:20.250747 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-09 02:58:20.250754 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:58:20.250760 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-09 02:58:20.250767 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-09 02:58:20.250774 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-09 02:58:20.250781 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-09 02:58:20.250788 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-09 02:58:20.250795 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:58:20.250802 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-09 02:58:20.250809 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-09 02:58:20.250814 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-09 02:58:20.250820 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-09 02:58:20.250827 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:58:20.250834 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:58:20.250841 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-09 02:58:20.250847 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-09 02:58:20.250854 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-09 02:58:20.250860 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-09 02:58:20.250867 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:58:20.250875 | orchestrator |
2026-02-09 02:58:20.250881 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-09 02:58:20.250907 | orchestrator | Monday 09 February 2026 02:58:18 +0000 (0:00:02.257) 0:00:45.449 *******
2026-02-09 02:58:20.250914 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:58:20.250921 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:58:20.250928 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:58:20.250934 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:58:20.250940 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:58:20.250947 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:58:20.250953 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:58:20.250959 | orchestrator |
2026-02-09 02:58:20.250966 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-09 02:58:20.250972 | orchestrator | Monday 09 February 2026 02:58:19 +0000 (0:00:00.679) 0:00:46.128 *******
2026-02-09 02:58:20.250979 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:58:20.250985 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:58:20.250991 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:58:20.250998 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:58:20.251005 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:58:20.251011 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:58:20.251017 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:58:20.251023 | orchestrator |
2026-02-09 02:58:20.251030 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:58:20.251037 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-09 02:58:20.251045 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-09 02:58:20.251059 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-09 02:58:20.251066 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-09 02:58:20.251072 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-09 02:58:20.251078 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-09 02:58:20.251085 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-09 02:58:20.251091 | orchestrator |
2026-02-09 02:58:20.251097 | orchestrator |
2026-02-09 02:58:20.251103 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:58:20.251109 | orchestrator | Monday 09 February 2026 02:58:19 +0000 (0:00:00.770) 0:00:46.898 *******
2026-02-09 02:58:20.251119 | orchestrator | ===============================================================================
2026-02-09 02:58:20.251125 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.90s
2026-02-09 02:58:20.251132 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.71s
2026-02-09 02:58:20.251138 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.55s
2026-02-09 02:58:20.251144 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.35s
2026-02-09 02:58:20.251150 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.26s
2026-02-09 02:58:20.251156 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 1.86s
2026-02-09 02:58:20.251163 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.72s
2026-02-09 02:58:20.251192 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.70s
2026-02-09 02:58:20.251199 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.69s
2026-02-09 02:58:20.251205 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2026-02-09 02:58:20.251211 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.56s
2026-02-09 02:58:20.251217 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.38s
2026-02-09 02:58:20.251223 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.30s
2026-02-09 02:58:20.251229 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.23s
2026-02-09 02:58:20.251235 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s
2026-02-09 02:58:20.251242 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.21s
2026-02-09 02:58:20.251248 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s
2026-02-09 02:58:20.251254 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s
2026-02-09 02:58:20.251260 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.92s
2026-02-09 02:58:20.251267 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.89s
2026-02-09 02:58:20.598232 | orchestrator | + osism apply wireguard
2026-02-09 02:58:32.709100 | orchestrator | 2026-02-09 02:58:32 | INFO  | Task 4247e6da-bc93-44f5-a27c-5d7a0f7e6870 (wireguard) was prepared for execution.
2026-02-09 02:58:32.709213 | orchestrator | 2026-02-09 02:58:32 | INFO  | It takes a moment until task 4247e6da-bc93-44f5-a27c-5d7a0f7e6870 (wireguard) has been started and output is visible here.
2026-02-09 02:58:53.562709 | orchestrator |
2026-02-09 02:58:53.562842 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-09 02:58:53.562886 | orchestrator |
2026-02-09 02:58:53.562899 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-09 02:58:53.562922 | orchestrator | Monday 09 February 2026 02:58:37 +0000 (0:00:00.237) 0:00:00.237 *******
2026-02-09 02:58:53.562934 | orchestrator | ok: [testbed-manager]
2026-02-09 02:58:53.562946 | orchestrator |
2026-02-09 02:58:53.562957 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-09 02:58:53.562968 | orchestrator | Monday 09 February 2026 02:58:38 +0000 (0:00:01.643) 0:00:01.880 *******
2026-02-09 02:58:53.562979 | orchestrator | changed: [testbed-manager]
2026-02-09 02:58:53.562996 | orchestrator |
2026-02-09 02:58:53.563008 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-09 02:58:53.563019 | orchestrator | Monday 09 February 2026 02:58:45 +0000 (0:00:06.711) 0:00:08.592 *******
2026-02-09 02:58:53.563030 | orchestrator | changed: [testbed-manager]
2026-02-09 02:58:53.563041 | orchestrator |
2026-02-09 02:58:53.563052 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-09 02:58:53.563063 | orchestrator | Monday 09 February 2026 02:58:46 +0000 (0:00:00.537) 0:00:09.130 *******
2026-02-09 02:58:53.563074 | orchestrator | changed: [testbed-manager]
2026-02-09 02:58:53.563085 | orchestrator |
2026-02-09 02:58:53.563096 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-09 02:58:53.563107 | orchestrator | Monday 09 February 2026 02:58:46 +0000 (0:00:00.451) 0:00:09.581 *******
2026-02-09 02:58:53.563118 | orchestrator | ok: [testbed-manager]
2026-02-09 02:58:53.563128 | orchestrator |
2026-02-09 02:58:53.563139 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-09 02:58:53.563150 | orchestrator | Monday 09 February 2026 02:58:47 +0000 (0:00:00.678) 0:00:10.260 *******
2026-02-09 02:58:53.563161 | orchestrator | ok: [testbed-manager]
2026-02-09 02:58:53.563172 | orchestrator |
2026-02-09 02:58:53.563183 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-09 02:58:53.563195 | orchestrator | Monday 09 February 2026 02:58:47 +0000 (0:00:00.461) 0:00:10.722 *******
2026-02-09 02:58:53.563209 | orchestrator | ok: [testbed-manager]
2026-02-09 02:58:53.563221 | orchestrator |
2026-02-09 02:58:53.563233 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-09 02:58:53.563247 | orchestrator | Monday 09 February 2026 02:58:48 +0000 (0:00:00.446) 0:00:11.168 *******
2026-02-09 02:58:53.563260 | orchestrator | changed: [testbed-manager]
2026-02-09 02:58:53.563273 | orchestrator |
2026-02-09 02:58:53.563286 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-09 02:58:53.563299 | orchestrator | Monday 09 February 2026 02:58:49 +0000 (0:00:01.155) 0:00:12.323 *******
2026-02-09 02:58:53.563312 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-09 02:58:53.563325 | orchestrator | changed: [testbed-manager]
2026-02-09 02:58:53.563338 | orchestrator |
2026-02-09 02:58:53.563351 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-09 02:58:53.563363 | orchestrator | Monday 09 February 2026 02:58:50 +0000 (0:00:01.010) 0:00:13.333 *******
2026-02-09 02:58:53.563376 | orchestrator | changed: [testbed-manager]
2026-02-09 02:58:53.563389 | orchestrator |
2026-02-09 02:58:53.563402 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-09 02:58:53.563415 | orchestrator | Monday 09 February 2026 02:58:52 +0000 (0:00:01.847) 0:00:15.181 *******
2026-02-09 02:58:53.563427 | orchestrator | changed: [testbed-manager]
2026-02-09 02:58:53.563440 | orchestrator |
2026-02-09 02:58:53.563453 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:58:53.563466 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 02:58:53.563479 | orchestrator |
2026-02-09 02:58:53.563492 | orchestrator |
2026-02-09 02:58:53.563505 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:58:53.563526 | orchestrator | Monday 09 February 2026 02:58:53 +0000 (0:00:00.935) 0:00:16.117 *******
2026-02-09 02:58:53.563539 | orchestrator | ===============================================================================
2026-02-09 02:58:53.563552 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.71s
2026-02-09 02:58:53.563565 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.85s
2026-02-09 02:58:53.563577 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.64s
2026-02-09 02:58:53.563588 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s
2026-02-09 02:58:53.563599 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s
2026-02-09 02:58:53.563626 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2026-02-09 02:58:53.563637 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.68s
2026-02-09 02:58:53.563648 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2026-02-09 02:58:53.563659 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s
2026-02-09 02:58:53.563670 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s
2026-02-09 02:58:53.563681 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s
2026-02-09 02:58:53.927846 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-09 02:58:53.963391 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-09 02:58:53.963456 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-09 02:58:54.043768 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 173 0 --:--:-- --:--:-- --:--:-- 175
2026-02-09 02:58:54.055770 | orchestrator | + osism apply --environment custom workarounds
2026-02-09 02:58:56.023378 | orchestrator | 2026-02-09 02:58:56 | INFO  | Trying to run play workarounds in environment custom
2026-02-09 02:59:06.144401 | orchestrator | 2026-02-09 02:59:06 | INFO  | Task 9a4e2eba-0d8c-430a-ad73-9fccf6cbfa27 (workarounds) was prepared for execution.
2026-02-09 02:59:06.144509 | orchestrator | 2026-02-09 02:59:06 | INFO  | It takes a moment until task 9a4e2eba-0d8c-430a-ad73-9fccf6cbfa27 (workarounds) has been started and output is visible here.
2026-02-09 02:59:32.055192 | orchestrator |
2026-02-09 02:59:32.055293 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 02:59:32.055349 | orchestrator |
2026-02-09 02:59:32.055359 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-09 02:59:32.055367 | orchestrator | Monday 09 February 2026 02:59:10 +0000 (0:00:00.140) 0:00:00.140 *******
2026-02-09 02:59:32.055376 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-09 02:59:32.055384 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-09 02:59:32.055391 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-09 02:59:32.055399 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-09 02:59:32.055406 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-09 02:59:32.055413 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-09 02:59:32.055421 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-09 02:59:32.055428 | orchestrator |
2026-02-09 02:59:32.055435 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-09 02:59:32.055443 | orchestrator |
2026-02-09 02:59:32.055450 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-09 02:59:32.055458 | orchestrator | Monday 09 February 2026 02:59:11 +0000 (0:00:00.859) 0:00:01.000 *******
2026-02-09 02:59:32.055465 | orchestrator | ok: [testbed-manager]
2026-02-09 02:59:32.055495 | orchestrator |
2026-02-09 02:59:32.055503 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-09 02:59:32.055510 | orchestrator |
2026-02-09 02:59:32.055518 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-09 02:59:32.055526 | orchestrator | Monday 09 February 2026 02:59:13 +0000 (0:00:02.512) 0:00:03.513 *******
2026-02-09 02:59:32.055533 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:59:32.055540 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:59:32.055547 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:59:32.055554 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:59:32.055562 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:59:32.055569 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:59:32.055576 | orchestrator |
2026-02-09 02:59:32.055583 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-09 02:59:32.055590 | orchestrator |
2026-02-09 02:59:32.055598 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-09 02:59:32.055696 | orchestrator | Monday 09 February 2026 02:59:15 +0000 (0:00:01.803) 0:00:05.316 *******
2026-02-09 02:59:32.055714 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-09 02:59:32.055723 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-09 02:59:32.055730 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-09 02:59:32.055738 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-09 02:59:32.055745 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-09 02:59:32.055752 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-09 02:59:32.055761 | orchestrator |
2026-02-09 02:59:32.055769 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-09 02:59:32.055778 | orchestrator | Monday 09 February 2026 02:59:17 +0000 (0:00:01.479) 0:00:06.796 *******
2026-02-09 02:59:32.055788 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:59:32.055796 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:59:32.055805 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:59:32.055813 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:59:32.055822 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:59:32.055830 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:59:32.055839 | orchestrator |
2026-02-09 02:59:32.055848 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-09 02:59:32.055857 | orchestrator | Monday 09 February 2026 02:59:20 +0000 (0:00:03.637) 0:00:10.433 *******
2026-02-09 02:59:32.055866 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:59:32.055875 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:59:32.055884 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:59:32.055892 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:59:32.055900 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:59:32.055909 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:59:32.055918 | orchestrator |
2026-02-09 02:59:32.055927 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-09 02:59:32.055935 | orchestrator |
2026-02-09 02:59:32.055944 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-09 02:59:32.055953 | orchestrator | Monday 09 February 2026 02:59:21 +0000 (0:00:00.805) 0:00:11.239 *******
2026-02-09 02:59:32.055961 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:59:32.055970 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:59:32.055978 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:59:32.055987 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:59:32.055995 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:59:32.056004 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:59:32.056020 | orchestrator | changed: [testbed-manager]
2026-02-09 02:59:32.056028 | orchestrator |
2026-02-09 02:59:32.056063 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-09 02:59:32.056072 | orchestrator | Monday 09 February 2026 02:59:23 +0000 (0:00:01.777) 0:00:13.017 *******
2026-02-09 02:59:32.056080 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:59:32.056089 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:59:32.056098 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:59:32.056106 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:59:32.056115 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:59:32.056124 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:59:32.056149 | orchestrator | changed: [testbed-manager]
2026-02-09 02:59:32.056157 | orchestrator |
2026-02-09 02:59:32.056165 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-09 02:59:32.056172 | orchestrator | Monday 09 February 2026 02:59:24 +0000 (0:00:01.602) 0:00:14.619 *******
2026-02-09 02:59:32.056179 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:59:32.056187 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:59:32.056194 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:59:32.056201 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:59:32.056208 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:59:32.056215 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:59:32.056223 | orchestrator | ok: [testbed-manager]
2026-02-09 02:59:32.056230 | orchestrator |
2026-02-09 02:59:32.056237 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-09 02:59:32.056244 | orchestrator | Monday 09 February 2026 02:59:26 +0000 (0:00:01.599) 0:00:16.218 *******
2026-02-09 02:59:32.056251 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:59:32.056259 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:59:32.056266 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:59:32.056273 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:59:32.056280 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:59:32.056287 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:59:32.056294 | orchestrator | changed: [testbed-manager]
2026-02-09 02:59:32.056302 | orchestrator |
2026-02-09 02:59:32.056309 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-09 02:59:32.056354 | orchestrator | Monday 09 February 2026 02:59:28 +0000 (0:00:02.121) 0:00:18.340 *******
2026-02-09 02:59:32.056361 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:59:32.056387 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:59:32.056395 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:59:32.056402 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:59:32.056409 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:59:32.056416 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:59:32.056424 | orchestrator | skipping: [testbed-manager]
2026-02-09 02:59:32.056431 | orchestrator |
2026-02-09 02:59:32.056438 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-09 02:59:32.056445 | orchestrator |
2026-02-09 02:59:32.056452 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-09 02:59:32.056459 | orchestrator | Monday 09 February 2026 02:59:29 +0000 (0:00:00.618) 0:00:18.959 *******
2026-02-09 02:59:32.056469 | orchestrator | ok: [testbed-node-0]
2026-02-09 02:59:32.056504 | orchestrator | ok: [testbed-node-2]
2026-02-09 02:59:32.056511 | orchestrator | ok: [testbed-node-3]
2026-02-09 02:59:32.056519 | orchestrator | ok: [testbed-node-1]
2026-02-09 02:59:32.056526 | orchestrator | ok: [testbed-node-5]
2026-02-09 02:59:32.056538 | orchestrator | ok: [testbed-node-4]
2026-02-09 02:59:32.056545 | orchestrator | ok: [testbed-manager]
2026-02-09 02:59:32.056552 | orchestrator |
2026-02-09 02:59:32.056559 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:59:32.056568 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-09 02:59:32.056577 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:59:32.056590 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:59:32.056597 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:59:32.056605 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:59:32.056612 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:59:32.056671 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:59:32.056679 | orchestrator |
2026-02-09 02:59:32.056687 | orchestrator |
2026-02-09 02:59:32.056694 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 02:59:32.056702 | orchestrator | Monday 09 February 2026 02:59:32 +0000 (0:00:02.830) 0:00:21.789 *******
2026-02-09 02:59:32.056709 | orchestrator | ===============================================================================
2026-02-09 02:59:32.056716 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.64s
2026-02-09 02:59:32.056726 | orchestrator | Install python3-docker -------------------------------------------------- 2.83s
2026-02-09 02:59:32.056738 | orchestrator | Apply netplan configuration --------------------------------------------- 2.51s
2026-02-09 02:59:32.056745 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.12s
2026-02-09 02:59:32.056752 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s
2026-02-09 02:59:32.056759 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.78s
2026-02-09 02:59:32.056766 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.60s
2026-02-09 02:59:32.056774 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.60s
2026-02-09 02:59:32.056781 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.48s
2026-02-09 02:59:32.056788 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.86s
2026-02-09 02:59:32.056795 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.81s
2026-02-09 02:59:32.056809 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s
2026-02-09 02:59:33.007227 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-09 02:59:45.391970 | orchestrator | 2026-02-09 02:59:45 | INFO  | Task 42e9a96d-c043-4980-9de9-dadbd3b8441a (reboot) was prepared for execution.
2026-02-09 02:59:45.392043 | orchestrator | 2026-02-09 02:59:45 | INFO  | It takes a moment until task 42e9a96d-c043-4980-9de9-dadbd3b8441a (reboot) has been started and output is visible here.
2026-02-09 02:59:56.643573 | orchestrator |
2026-02-09 02:59:56.643726 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-09 02:59:56.643743 | orchestrator |
2026-02-09 02:59:56.643752 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-09 02:59:56.643762 | orchestrator | Monday 09 February 2026 02:59:50 +0000 (0:00:00.233) 0:00:00.233 *******
2026-02-09 02:59:56.643772 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:59:56.643783 | orchestrator |
2026-02-09 02:59:56.643794 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-09 02:59:56.643805 | orchestrator | Monday 09 February 2026 02:59:50 +0000 (0:00:00.130) 0:00:00.364 *******
2026-02-09 02:59:56.643815 | orchestrator | changed: [testbed-node-0]
2026-02-09 02:59:56.643826 | orchestrator |
2026-02-09 02:59:56.643836 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-09 02:59:56.643871 | orchestrator | Monday 09 February 2026 02:59:51 +0000 (0:00:01.033) 0:00:01.397 *******
2026-02-09 02:59:56.643878 | orchestrator | skipping: [testbed-node-0]
2026-02-09 02:59:56.643884 | orchestrator |
2026-02-09 02:59:56.643890 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-09 02:59:56.643896 | orchestrator |
2026-02-09 02:59:56.643902 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-09 02:59:56.643907 | orchestrator | Monday 09 February 2026 02:59:51 +0000 (0:00:00.157) 0:00:01.555 *******
2026-02-09 02:59:56.643913 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:59:56.643919 | orchestrator |
2026-02-09 02:59:56.643925 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-09 02:59:56.643931 | orchestrator | Monday 09 February 2026 02:59:51 +0000 (0:00:00.115) 0:00:01.671 *******
2026-02-09 02:59:56.643937 | orchestrator | changed: [testbed-node-1]
2026-02-09 02:59:56.643943 | orchestrator |
2026-02-09 02:59:56.643949 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-09 02:59:56.643966 | orchestrator | Monday 09 February 2026 02:59:52 +0000 (0:00:00.703) 0:00:02.374 *******
2026-02-09 02:59:56.643972 | orchestrator | skipping: [testbed-node-1]
2026-02-09 02:59:56.643978 | orchestrator |
2026-02-09 02:59:56.643984 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-09 02:59:56.643990 | orchestrator |
2026-02-09 02:59:56.643995 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-09 02:59:56.644001 | orchestrator | Monday 09 February 2026 02:59:52 +0000 (0:00:00.127) 0:00:02.501 *******
2026-02-09 02:59:56.644007 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:59:56.644013 | orchestrator |
2026-02-09 02:59:56.644018 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-09 02:59:56.644024 | orchestrator | Monday 09 February 2026 02:59:52 +0000 (0:00:00.215) 0:00:02.717 *******
2026-02-09 02:59:56.644030 | orchestrator | changed: [testbed-node-2]
2026-02-09 02:59:56.644036 | orchestrator |
2026-02-09 02:59:56.644042 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-09 02:59:56.644048 | orchestrator | Monday 09 February 2026 02:59:53 +0000 (0:00:00.693) 0:00:03.410 *******
2026-02-09 02:59:56.644054 | orchestrator | skipping: [testbed-node-2]
2026-02-09 02:59:56.644059 | orchestrator |
2026-02-09 02:59:56.644065 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-09 02:59:56.644071 | orchestrator |
2026-02-09 02:59:56.644077 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-09 02:59:56.644083 | orchestrator | Monday 09 February 2026 02:59:53 +0000 (0:00:00.127) 0:00:03.537 *******
2026-02-09 02:59:56.644088 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:59:56.644094 | orchestrator |
2026-02-09 02:59:56.644100 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-09 02:59:56.644106 | orchestrator | Monday 09 February 2026 02:59:53 +0000 (0:00:00.118) 0:00:03.656 *******
2026-02-09 02:59:56.644111 | orchestrator | changed: [testbed-node-3]
2026-02-09 02:59:56.644117 | orchestrator |
2026-02-09 02:59:56.644123 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-09 02:59:56.644130 | orchestrator | Monday 09 February 2026 02:59:54 +0000 (0:00:00.655) 0:00:04.312 *******
2026-02-09 02:59:56.644136 | orchestrator | skipping: [testbed-node-3]
2026-02-09 02:59:56.644143 | orchestrator |
2026-02-09 02:59:56.644150 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-09 02:59:56.644156 | orchestrator |
2026-02-09 02:59:56.644163 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-09 02:59:56.644170 | orchestrator | Monday 09 February 2026 02:59:54 +0000 (0:00:00.137) 0:00:04.449 *******
2026-02-09 02:59:56.644176 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:59:56.644183 | orchestrator |
2026-02-09 02:59:56.644190 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-09 02:59:56.644203 | orchestrator | Monday 09 February 2026 02:59:54 +0000 (0:00:00.138) 0:00:04.588 *******
2026-02-09 02:59:56.644209 | orchestrator | changed: [testbed-node-4]
2026-02-09 02:59:56.644217 | orchestrator |
2026-02-09 02:59:56.644223 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-09 02:59:56.644230 | orchestrator | Monday 09 February 2026 02:59:55 +0000 (0:00:00.687) 0:00:05.276 *******
2026-02-09 02:59:56.644236 | orchestrator | skipping: [testbed-node-4]
2026-02-09 02:59:56.644243 | orchestrator |
2026-02-09 02:59:56.644250 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-09 02:59:56.644257 | orchestrator |
2026-02-09 02:59:56.644264 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-09 02:59:56.644271 | orchestrator | Monday 09 February 2026 02:59:55 +0000 (0:00:00.155) 0:00:05.432 *******
2026-02-09 02:59:56.644278 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:59:56.644284 | orchestrator |
2026-02-09 02:59:56.644290 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-09 02:59:56.644296 | orchestrator | Monday 09 February 2026 02:59:55 +0000 (0:00:00.128) 0:00:05.561 *******
2026-02-09 02:59:56.644302 | orchestrator | changed: [testbed-node-5]
2026-02-09 02:59:56.644307 | orchestrator |
2026-02-09 02:59:56.644313 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-09 02:59:56.644319 | orchestrator | Monday 09 February 2026 02:59:56 +0000 (0:00:00.663) 0:00:06.224 *******
2026-02-09 02:59:56.644339 | orchestrator | skipping: [testbed-node-5]
2026-02-09 02:59:56.644346 | orchestrator |
2026-02-09 02:59:56.644352 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 02:59:56.644359 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 02:59:56.644366 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 02:59:56.644372 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 02:59:56.644378 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 02:59:56.644384 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 02:59:56.644390 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 02:59:56.644396 | orchestrator | 2026-02-09 02:59:56.644401 | orchestrator | 2026-02-09 02:59:56.644407 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 02:59:56.644413 | orchestrator | Monday 09 February 2026 02:59:56 +0000 (0:00:00.040) 0:00:06.264 ******* 2026-02-09 02:59:56.644422 | orchestrator | =============================================================================== 2026-02-09 02:59:56.644428 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.44s 2026-02-09 02:59:56.644434 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.85s 2026-02-09 02:59:56.644440 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.75s 2026-02-09 02:59:57.238703 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-09 03:00:09.437244 | orchestrator | 2026-02-09 03:00:09 | INFO  | Task a344ee84-ea63-4e2c-b969-fa901183a1d1 (wait-for-connection) was prepared for execution. 2026-02-09 03:00:09.437391 | orchestrator | 2026-02-09 03:00:09 | INFO  | It takes a moment until task a344ee84-ea63-4e2c-b969-fa901183a1d1 (wait-for-connection) has been started and output is visible here. 
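The reboot sequence above fires the reboot per node without blocking ("do not wait for the reboot to complete") and then verifies reachability in a separate `wait-for-connection` step. The underlying pattern is a bounded retry poll; a minimal sketch, assuming a hypothetical `poll_until` helper (not part of OSISM) that retries any command until it succeeds:

```shell
# Hypothetical helper illustrating the "reboot, then wait until the
# remote system is reachable" pattern from the log above.
# Retries a command up to max_attempts times, sleeping between tries.
poll_until() {
    local max_attempts=$1 interval=$2
    shift 2
    local attempt=1
    until "$@"; do
        if (( attempt++ == max_attempts )); then
            return 1
        fi
        sleep "$interval"
    done
}

# Example: poll SSH reachability after an async reboot (assumed usage):
#   poll_until 60 5 ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true
```

In the job itself this is implemented as Ansible tasks (`osism apply reboot` / `osism apply wait-for-connection`); the sketch only mirrors the control flow.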
2026-02-09 03:00:25.766191 | orchestrator | 2026-02-09 03:00:25.766282 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-09 03:00:25.766292 | orchestrator | 2026-02-09 03:00:25.766299 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-09 03:00:25.766307 | orchestrator | Monday 09 February 2026 03:00:13 +0000 (0:00:00.231) 0:00:00.231 ******* 2026-02-09 03:00:25.766314 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:00:25.766322 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:00:25.766329 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:00:25.766336 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:00:25.766343 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:00:25.766350 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:00:25.766357 | orchestrator | 2026-02-09 03:00:25.766364 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:00:25.766371 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:00:25.766380 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:00:25.766387 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:00:25.766394 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:00:25.766401 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:00:25.766407 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:00:25.766415 | orchestrator | 2026-02-09 03:00:25.766422 | orchestrator | 2026-02-09 03:00:25.766428 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-09 03:00:25.766435 | orchestrator | Monday 09 February 2026 03:00:25 +0000 (0:00:11.520) 0:00:11.751 ******* 2026-02-09 03:00:25.766447 | orchestrator | =============================================================================== 2026-02-09 03:00:25.766458 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2026-02-09 03:00:26.137885 | orchestrator | + osism apply hddtemp 2026-02-09 03:00:38.176735 | orchestrator | 2026-02-09 03:00:38 | INFO  | Task 1279ff1e-32dd-47e8-b425-9a13dcea2e8c (hddtemp) was prepared for execution. 2026-02-09 03:00:38.176850 | orchestrator | 2026-02-09 03:00:38 | INFO  | It takes a moment until task 1279ff1e-32dd-47e8-b425-9a13dcea2e8c (hddtemp) has been started and output is visible here. 2026-02-09 03:01:06.790959 | orchestrator | 2026-02-09 03:01:06.791040 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-09 03:01:06.791046 | orchestrator | 2026-02-09 03:01:06.791051 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-09 03:01:06.791055 | orchestrator | Monday 09 February 2026 03:00:42 +0000 (0:00:00.268) 0:00:00.268 ******* 2026-02-09 03:01:06.791060 | orchestrator | ok: [testbed-manager] 2026-02-09 03:01:06.791065 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:01:06.791070 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:01:06.791074 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:01:06.791081 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:01:06.791088 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:01:06.791095 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:01:06.791102 | orchestrator | 2026-02-09 03:01:06.791109 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-09 03:01:06.791115 | orchestrator | Monday 09 February 2026 
03:00:43 +0000 (0:00:00.721) 0:00:00.990 ******* 2026-02-09 03:01:06.791123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:01:06.791145 | orchestrator | 2026-02-09 03:01:06.791149 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-09 03:01:06.791153 | orchestrator | Monday 09 February 2026 03:00:44 +0000 (0:00:01.251) 0:00:02.241 ******* 2026-02-09 03:01:06.791159 | orchestrator | ok: [testbed-manager] 2026-02-09 03:01:06.791165 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:01:06.791171 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:01:06.791177 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:01:06.791183 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:01:06.791187 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:01:06.791191 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:01:06.791195 | orchestrator | 2026-02-09 03:01:06.791199 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-09 03:01:06.791219 | orchestrator | Monday 09 February 2026 03:00:46 +0000 (0:00:01.821) 0:00:04.062 ******* 2026-02-09 03:01:06.791230 | orchestrator | changed: [testbed-manager] 2026-02-09 03:01:06.791237 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:01:06.791243 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:01:06.791249 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:01:06.791255 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:01:06.791261 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:01:06.791266 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:01:06.791272 | orchestrator | 2026-02-09 03:01:06.791278 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-09 03:01:06.791285 | orchestrator | Monday 09 February 2026 03:00:47 +0000 (0:00:01.255) 0:00:05.318 ******* 2026-02-09 03:01:06.791291 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:01:06.791297 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:01:06.791304 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:01:06.791310 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:01:06.791317 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:01:06.791321 | orchestrator | ok: [testbed-manager] 2026-02-09 03:01:06.791325 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:01:06.791328 | orchestrator | 2026-02-09 03:01:06.791332 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-09 03:01:06.791336 | orchestrator | Monday 09 February 2026 03:00:49 +0000 (0:00:02.120) 0:00:07.439 ******* 2026-02-09 03:01:06.791340 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:01:06.791344 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:01:06.791348 | orchestrator | changed: [testbed-manager] 2026-02-09 03:01:06.791351 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:01:06.791355 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:01:06.791359 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:01:06.791363 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:01:06.791367 | orchestrator | 2026-02-09 03:01:06.791371 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-09 03:01:06.791374 | orchestrator | Monday 09 February 2026 03:00:50 +0000 (0:00:00.851) 0:00:08.290 ******* 2026-02-09 03:01:06.791378 | orchestrator | changed: [testbed-manager] 2026-02-09 03:01:06.791382 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:01:06.791386 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:01:06.791390 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:01:06.791394 | orchestrator | changed: 
[testbed-node-4] 2026-02-09 03:01:06.791398 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:01:06.791401 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:01:06.791406 | orchestrator | 2026-02-09 03:01:06.791410 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-09 03:01:06.791414 | orchestrator | Monday 09 February 2026 03:01:02 +0000 (0:00:12.385) 0:00:20.676 ******* 2026-02-09 03:01:06.791418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:01:06.791427 | orchestrator | 2026-02-09 03:01:06.791431 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-09 03:01:06.791435 | orchestrator | Monday 09 February 2026 03:01:04 +0000 (0:00:01.331) 0:00:22.007 ******* 2026-02-09 03:01:06.791439 | orchestrator | changed: [testbed-manager] 2026-02-09 03:01:06.791445 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:01:06.791452 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:01:06.791458 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:01:06.791464 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:01:06.791470 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:01:06.791476 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:01:06.791482 | orchestrator | 2026-02-09 03:01:06.791487 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:01:06.791493 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:01:06.791516 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:01:06.791522 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:01:06.791527 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:01:06.791532 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:01:06.791537 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:01:06.791541 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:01:06.791546 | orchestrator | 2026-02-09 03:01:06.791550 | orchestrator | 2026-02-09 03:01:06.791555 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:01:06.791562 | orchestrator | Monday 09 February 2026 03:01:06 +0000 (0:00:01.999) 0:00:24.007 ******* 2026-02-09 03:01:06.791569 | orchestrator | =============================================================================== 2026-02-09 03:01:06.791575 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.39s 2026-02-09 03:01:06.791582 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.12s 2026-02-09 03:01:06.791588 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.00s 2026-02-09 03:01:06.791599 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.82s 2026-02-09 03:01:06.791606 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.33s 2026-02-09 03:01:06.791612 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.26s 2026-02-09 03:01:06.791619 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.25s 2026-02-09 03:01:06.791625 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.85s 2026-02-09 03:01:06.791630 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s 2026-02-09 03:01:07.122702 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-09 03:01:07.173882 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-09 03:01:07.174012 | orchestrator | + sudo systemctl restart manager.service 2026-02-09 03:01:21.425939 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-09 03:01:21.426094 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-09 03:01:21.426123 | orchestrator | + local max_attempts=60 2026-02-09 03:01:21.426133 | orchestrator | + local name=ceph-ansible 2026-02-09 03:01:21.426150 | orchestrator | + local attempt_num=1 2026-02-09 03:01:21.426160 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:01:21.464978 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:01:21.465098 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:01:21.465117 | orchestrator | + sleep 5 2026-02-09 03:01:26.469361 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:01:26.500706 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:01:26.500805 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:01:26.500820 | orchestrator | + sleep 5 2026-02-09 03:01:31.503720 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:01:31.541305 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:01:31.541385 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:01:31.541394 | orchestrator | + sleep 5 2026-02-09 03:01:36.544190 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:01:36.575224 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:01:36.575320 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-09 03:01:36.575334 | orchestrator | + sleep 5 2026-02-09 03:01:41.579152 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:01:41.608252 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:01:41.608362 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:01:41.608380 | orchestrator | + sleep 5 2026-02-09 03:01:46.612243 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:01:46.655222 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:01:46.655306 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:01:46.655313 | orchestrator | + sleep 5 2026-02-09 03:01:51.661725 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:01:51.691998 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:01:51.692080 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:01:51.692092 | orchestrator | + sleep 5 2026-02-09 03:01:56.695803 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:01:56.738370 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-09 03:01:56.738549 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:01:56.738565 | orchestrator | + sleep 5 2026-02-09 03:02:01.742366 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:02:01.786383 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-09 03:02:01.786479 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:02:01.786488 | orchestrator | + sleep 5 2026-02-09 03:02:06.790231 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:02:06.832279 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-09 03:02:06.832373 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-09 03:02:06.832383 | orchestrator | + sleep 5 2026-02-09 03:02:11.838542 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:02:11.887335 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-09 03:02:11.887405 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:02:11.887411 | orchestrator | + sleep 5 2026-02-09 03:02:16.893749 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:02:16.932455 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-09 03:02:16.932541 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:02:16.932551 | orchestrator | + sleep 5 2026-02-09 03:02:21.938992 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:02:21.974634 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-09 03:02:21.974768 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-09 03:02:21.974783 | orchestrator | + sleep 5 2026-02-09 03:02:26.979656 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-09 03:02:27.027221 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:02:27.027313 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-09 03:02:27.027323 | orchestrator | + local max_attempts=60 2026-02-09 03:02:27.027331 | orchestrator | + local name=kolla-ansible 2026-02-09 03:02:27.027337 | orchestrator | + local attempt_num=1 2026-02-09 03:02:27.027344 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-09 03:02:27.062518 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:02:27.062606 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-09 03:02:27.062644 | orchestrator | + local max_attempts=60 2026-02-09 03:02:27.062653 | orchestrator | + local name=osism-ansible 2026-02-09 03:02:27.062660 | 
orchestrator | + local attempt_num=1 2026-02-09 03:02:27.062668 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-09 03:02:27.094658 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-09 03:02:27.094776 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-09 03:02:27.094784 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-09 03:02:27.264637 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-09 03:02:27.430983 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-09 03:02:27.592427 | orchestrator | ARA in osism-ansible already disabled. 2026-02-09 03:02:27.744245 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-09 03:02:27.745064 | orchestrator | + osism apply gather-facts 2026-02-09 03:02:39.977496 | orchestrator | 2026-02-09 03:02:39 | INFO  | Task d3cd481f-0016-4d6e-91da-20214e7980fe (gather-facts) was prepared for execution. 2026-02-09 03:02:39.977575 | orchestrator | 2026-02-09 03:02:39 | INFO  | It takes a moment until task d3cd481f-0016-4d6e-91da-20214e7980fe (gather-facts) has been started and output is visible here. 
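The container health wait traced above (`wait_for_container_healthy 60 ceph-ansible`, repeated `docker inspect` calls, `sleep 5`) can be reconstructed directly from the `+`/`++` xtrace lines. A sketch of that loop; the trace invokes `/usr/bin/docker` by absolute path, plain `docker` is used here so the function can be exercised standalone:

```shell
# Reconstruction of wait_for_container_healthy from the xtrace above:
# poll the container's health status until it reports "healthy",
# retrying every 5 seconds up to max_attempts times.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

As seen in the log, the status typically moves through `unhealthy`/`starting` before settling on `healthy` after a manager restart, which is why the loop tolerates up to 60 attempts.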
2026-02-09 03:02:52.793626 | orchestrator | 2026-02-09 03:02:52.793783 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-09 03:02:52.793803 | orchestrator | 2026-02-09 03:02:52.793816 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-09 03:02:52.793828 | orchestrator | Monday 09 February 2026 03:02:44 +0000 (0:00:00.217) 0:00:00.217 ******* 2026-02-09 03:02:52.793840 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:02:52.793853 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:02:52.793864 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:02:52.793875 | orchestrator | ok: [testbed-manager] 2026-02-09 03:02:52.793886 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:02:52.793897 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:02:52.793908 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:02:52.793919 | orchestrator | 2026-02-09 03:02:52.793930 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-09 03:02:52.793941 | orchestrator | 2026-02-09 03:02:52.793952 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-09 03:02:52.793963 | orchestrator | Monday 09 February 2026 03:02:51 +0000 (0:00:07.594) 0:00:07.812 ******* 2026-02-09 03:02:52.793974 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:02:52.793986 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:02:52.793997 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:02:52.794008 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:02:52.794089 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:02:52.794112 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:02:52.794130 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:02:52.794195 | orchestrator | 2026-02-09 03:02:52.794217 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-09 03:02:52.794238 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:02:52.794256 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:02:52.794267 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:02:52.794278 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:02:52.794289 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:02:52.794300 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:02:52.794341 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 03:02:52.794353 | orchestrator | 2026-02-09 03:02:52.794364 | orchestrator | 2026-02-09 03:02:52.794376 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:02:52.794387 | orchestrator | Monday 09 February 2026 03:02:52 +0000 (0:00:00.560) 0:00:08.372 ******* 2026-02-09 03:02:52.794397 | orchestrator | =============================================================================== 2026-02-09 03:02:52.794408 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.59s 2026-02-09 03:02:52.794419 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-02-09 03:02:53.178491 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-09 03:02:53.192512 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-09 
03:02:53.203284 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-09 03:02:53.215955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-09 03:02:53.228131 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-09 03:02:53.241362 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-09 03:02:53.261091 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-09 03:02:53.274965 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-09 03:02:53.286488 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-09 03:02:53.299189 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-09 03:02:53.312633 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-09 03:02:53.324435 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-09 03:02:53.341732 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-09 03:02:53.358234 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-09 03:02:53.370174 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-09 03:02:53.380060 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-09 03:02:53.391796 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-09 03:02:53.406863 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-09 03:02:53.419821 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-09 03:02:53.430224 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-09 03:02:53.447900 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-09 03:02:53.462564 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-09 03:02:53.474107 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-09 03:02:53.490106 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-09 03:02:53.713523 | orchestrator | ok: Runtime: 0:24:30.142313 2026-02-09 03:02:53.819022 | 2026-02-09 03:02:53.819157 | TASK [Deploy services] 2026-02-09 03:02:54.519107 | orchestrator | 2026-02-09 03:02:54.519371 | orchestrator | # DEPLOY SERVICES 2026-02-09 03:02:54.519397 | orchestrator | 2026-02-09 03:02:54.519412 | orchestrator | + set -e 2026-02-09 03:02:54.519424 | orchestrator | + echo 2026-02-09 03:02:54.519437 | orchestrator | + echo '# DEPLOY SERVICES' 2026-02-09 03:02:54.519450 | orchestrator | + echo 2026-02-09 03:02:54.519494 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-09 03:02:54.519516 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 03:02:54.519527 | orchestrator | ++ INTERACTIVE=false 2026-02-09 
03:02:54.519535 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 03:02:54.519552 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-09 03:02:54.519563 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 03:02:54.519578 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 03:02:54.519589 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 03:02:54.519605 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-09 03:02:54.519617 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 03:02:54.519630 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 03:02:54.519641 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 03:02:54.519656 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 03:02:54.519667 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 03:02:54.519678 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 03:02:54.519732 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 03:02:54.519746 | orchestrator | ++ export ARA=false 2026-02-09 03:02:54.519758 | orchestrator | ++ ARA=false 2026-02-09 03:02:54.519770 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 03:02:54.519781 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 03:02:54.519793 | orchestrator | ++ export TEMPEST=false 2026-02-09 03:02:54.519804 | orchestrator | ++ TEMPEST=false 2026-02-09 03:02:54.519830 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 03:02:54.519838 | orchestrator | ++ IS_ZUUL=true 2026-02-09 03:02:54.519845 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 03:02:54.519852 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 03:02:54.519859 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 03:02:54.519866 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 03:02:54.519873 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 03:02:54.519879 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 03:02:54.519886 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 
03:02:54.519893 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 03:02:54.519899 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 03:02:54.519912 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 03:02:54.519919 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-09 03:02:54.528798 | orchestrator | + set -e 2026-02-09 03:02:54.528899 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-09 03:02:54.528913 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 03:02:54.528921 | orchestrator | ++ INTERACTIVE=false 2026-02-09 03:02:54.528928 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 03:02:54.528935 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-09 03:02:54.528941 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 03:02:54.528948 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 03:02:54.528955 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 03:02:54.528962 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-09 03:02:54.528969 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 03:02:54.528976 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 03:02:54.528983 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 03:02:54.528989 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 03:02:54.528996 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 03:02:54.529003 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 03:02:54.529010 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 03:02:54.529017 | orchestrator | ++ export ARA=false 2026-02-09 03:02:54.529024 | orchestrator | ++ ARA=false 2026-02-09 03:02:54.529031 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 03:02:54.529038 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 03:02:54.529045 | orchestrator | ++ export TEMPEST=false 2026-02-09 03:02:54.529055 | orchestrator | ++ TEMPEST=false 2026-02-09 03:02:54.529063 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 
03:02:54.529069 | orchestrator | ++ IS_ZUUL=true 2026-02-09 03:02:54.529076 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 03:02:54.529083 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 03:02:54.529089 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 03:02:54.529096 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 03:02:54.529103 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 03:02:54.529109 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 03:02:54.529126 | orchestrator | 2026-02-09 03:02:54.529134 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 03:02:54.529167 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 03:02:54.529174 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 03:02:54.529180 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 03:02:54.529187 | orchestrator | + echo 2026-02-09 03:02:54.529193 | orchestrator | + echo '# PULL IMAGES' 2026-02-09 03:02:54.529200 | orchestrator | # PULL IMAGES 2026-02-09 03:02:54.529748 | orchestrator | 2026-02-09 03:02:54.529766 | orchestrator | + echo 2026-02-09 03:02:54.530356 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-09 03:02:54.593759 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-09 03:02:54.593846 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-09 03:02:56.569759 | orchestrator | 2026-02-09 03:02:56 | INFO  | Trying to run play pull-images in environment custom 2026-02-09 03:03:06.829819 | orchestrator | 2026-02-09 03:03:06 | INFO  | Task 557ddc57-c798-49d6-ae35-42493d605523 (pull-images) was prepared for execution. 2026-02-09 03:03:06.829935 | orchestrator | 2026-02-09 03:03:06 | INFO  | Task 557ddc57-c798-49d6-ae35-42493d605523 is running in background. No more output. Check ARA for logs. 
2026-02-09 03:03:07.188240 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-09 03:03:19.337832 | orchestrator | 2026-02-09 03:03:19 | INFO  | Task bfa74a0c-d062-40fc-8059-b5bec701ccd3 (cgit) was prepared for execution. 2026-02-09 03:03:19.337928 | orchestrator | 2026-02-09 03:03:19 | INFO  | Task bfa74a0c-d062-40fc-8059-b5bec701ccd3 is running in background. No more output. Check ARA for logs. 2026-02-09 03:03:32.755934 | orchestrator | 2026-02-09 03:03:32 | INFO  | Task ea5037fb-c86d-4bf8-8434-9eb516d10c40 (dotfiles) was prepared for execution. 2026-02-09 03:03:32.756117 | orchestrator | 2026-02-09 03:03:32 | INFO  | Task ea5037fb-c86d-4bf8-8434-9eb516d10c40 is running in background. No more output. Check ARA for logs. 2026-02-09 03:03:45.362829 | orchestrator | 2026-02-09 03:03:45 | INFO  | Task e1f04943-7929-4b52-82f8-b5ee37b21cfe (homer) was prepared for execution. 2026-02-09 03:03:45.362948 | orchestrator | 2026-02-09 03:03:45 | INFO  | Task e1f04943-7929-4b52-82f8-b5ee37b21cfe is running in background. No more output. Check ARA for logs. 2026-02-09 03:03:58.097868 | orchestrator | 2026-02-09 03:03:58 | INFO  | Task fee72c7f-504b-41a0-bbce-3f29be163a6b (phpmyadmin) was prepared for execution. 2026-02-09 03:03:58.097948 | orchestrator | 2026-02-09 03:03:58 | INFO  | Task fee72c7f-504b-41a0-bbce-3f29be163a6b is running in background. No more output. Check ARA for logs. 2026-02-09 03:04:10.632784 | orchestrator | 2026-02-09 03:04:10 | INFO  | Task 677526a4-811c-4ca8-83bf-61604ca1d0a2 (sosreport) was prepared for execution. 2026-02-09 03:04:10.632855 | orchestrator | 2026-02-09 03:04:10 | INFO  | Task 677526a4-811c-4ca8-83bf-61604ca1d0a2 is running in background. No more output. Check ARA for logs. 
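The scripts export `OSISM_APPLY_RETRY=1` and pass `-r 2` to `osism apply`; the actual retry handling lives inside the osism CLI. Purely as an illustration of the pattern, a generic retry wrapper with the same shape:

```shell
# Illustrative retry loop (NOT the osism implementation): run a command
# up to N times, stopping as soon as it succeeds.
retry() {
  tries=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$tries" ]; then
      return 1          # exhausted all attempts
    fi
    n=$((n + 1))
  done
}
retry 2 sh -c 'echo attempt succeeded'
```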
2026-02-09 03:04:11.025788 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-09 03:04:11.043915 | orchestrator | + set -e 2026-02-09 03:04:11.044014 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-09 03:04:11.044029 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 03:04:11.044040 | orchestrator | ++ INTERACTIVE=false 2026-02-09 03:04:11.044052 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 03:04:11.044062 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-09 03:04:11.044068 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 03:04:11.044074 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 03:04:11.044080 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 03:04:11.044087 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-09 03:04:11.044095 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 03:04:11.044102 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 03:04:11.044107 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 03:04:11.044113 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 03:04:11.044119 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 03:04:11.044124 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 03:04:11.044130 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 03:04:11.044135 | orchestrator | ++ export ARA=false 2026-02-09 03:04:11.044141 | orchestrator | ++ ARA=false 2026-02-09 03:04:11.044147 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 03:04:11.044177 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 03:04:11.044183 | orchestrator | ++ export TEMPEST=false 2026-02-09 03:04:11.044189 | orchestrator | ++ TEMPEST=false 2026-02-09 03:04:11.044194 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 03:04:11.044199 | orchestrator | ++ IS_ZUUL=true 2026-02-09 03:04:11.044217 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 03:04:11.044227 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 03:04:11.044233 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 03:04:11.044238 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 03:04:11.044243 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 03:04:11.044249 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 03:04:11.044254 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 03:04:11.044260 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 03:04:11.044265 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 03:04:11.044271 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 03:04:11.045128 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-09 03:04:11.151139 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-09 03:04:11.151212 | orchestrator | + osism apply frr 2026-02-09 03:04:24.199160 | orchestrator | 2026-02-09 03:04:24 | INFO  | Task 46828dd5-e6e2-4272-b9ae-b0636871f66e (frr) was prepared for execution. 2026-02-09 03:04:24.199259 | orchestrator | 2026-02-09 03:04:24 | INFO  | It takes a moment until task 46828dd5-e6e2-4272-b9ae-b0636871f66e (frr) has been started and output is visible here. 
2026-02-09 03:05:02.750904 | orchestrator | 2026-02-09 03:05:02.750983 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-09 03:05:02.750991 | orchestrator | 2026-02-09 03:05:02.750998 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-09 03:05:02.751010 | orchestrator | Monday 09 February 2026 03:04:31 +0000 (0:00:01.157) 0:00:01.157 ******* 2026-02-09 03:05:02.751018 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-09 03:05:02.751025 | orchestrator | 2026-02-09 03:05:02.751032 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-09 03:05:02.751037 | orchestrator | Monday 09 February 2026 03:04:32 +0000 (0:00:01.092) 0:00:02.250 ******* 2026-02-09 03:05:02.751043 | orchestrator | changed: [testbed-manager] 2026-02-09 03:05:02.751052 | orchestrator | 2026-02-09 03:05:02.751059 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-09 03:05:02.751068 | orchestrator | Monday 09 February 2026 03:04:35 +0000 (0:00:02.640) 0:00:04.890 ******* 2026-02-09 03:05:02.751075 | orchestrator | changed: [testbed-manager] 2026-02-09 03:05:02.751081 | orchestrator | 2026-02-09 03:05:02.751088 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-09 03:05:02.751095 | orchestrator | Monday 09 February 2026 03:04:49 +0000 (0:00:14.855) 0:00:19.746 ******* 2026-02-09 03:05:02.751101 | orchestrator | ok: [testbed-manager] 2026-02-09 03:05:02.751106 | orchestrator | 2026-02-09 03:05:02.751110 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-09 03:05:02.751114 | orchestrator | Monday 09 February 2026 03:04:51 +0000 (0:00:01.205) 0:00:20.951 ******* 2026-02-09 
03:05:02.751118 | orchestrator | changed: [testbed-manager] 2026-02-09 03:05:02.751123 | orchestrator | 2026-02-09 03:05:02.751127 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-09 03:05:02.751130 | orchestrator | Monday 09 February 2026 03:04:52 +0000 (0:00:01.126) 0:00:22.078 ******* 2026-02-09 03:05:02.751134 | orchestrator | ok: [testbed-manager] 2026-02-09 03:05:02.751138 | orchestrator | 2026-02-09 03:05:02.751142 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-09 03:05:02.751147 | orchestrator | Monday 09 February 2026 03:04:53 +0000 (0:00:01.327) 0:00:23.406 ******* 2026-02-09 03:05:02.751151 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:05:02.751154 | orchestrator | 2026-02-09 03:05:02.751158 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-09 03:05:02.751162 | orchestrator | Monday 09 February 2026 03:04:53 +0000 (0:00:00.182) 0:00:23.588 ******* 2026-02-09 03:05:02.751187 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:05:02.751194 | orchestrator | 2026-02-09 03:05:02.751201 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-09 03:05:02.751207 | orchestrator | Monday 09 February 2026 03:04:53 +0000 (0:00:00.184) 0:00:23.772 ******* 2026-02-09 03:05:02.751213 | orchestrator | changed: [testbed-manager] 2026-02-09 03:05:02.751218 | orchestrator | 2026-02-09 03:05:02.751224 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-09 03:05:02.751230 | orchestrator | Monday 09 February 2026 03:04:55 +0000 (0:00:01.170) 0:00:24.942 ******* 2026-02-09 03:05:02.751236 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-09 03:05:02.751241 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-09 03:05:02.751248 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-09 03:05:02.751254 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-09 03:05:02.751261 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-09 03:05:02.751267 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-09 03:05:02.751273 | orchestrator | 2026-02-09 03:05:02.751279 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-09 03:05:02.751285 | orchestrator | Monday 09 February 2026 03:04:58 +0000 (0:00:03.532) 0:00:28.475 ******* 2026-02-09 03:05:02.751291 | orchestrator | ok: [testbed-manager] 2026-02-09 03:05:02.751297 | orchestrator | 2026-02-09 03:05:02.751303 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-09 03:05:02.751309 | orchestrator | Monday 09 February 2026 03:05:00 +0000 (0:00:01.859) 0:00:30.334 ******* 2026-02-09 03:05:02.751315 | orchestrator | changed: [testbed-manager] 2026-02-09 03:05:02.751321 | orchestrator | 2026-02-09 03:05:02.751327 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:05:02.751333 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:05:02.751340 | orchestrator | 2026-02-09 03:05:02.751346 | orchestrator | 2026-02-09 03:05:02.751359 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:05:02.751365 | orchestrator | Monday 09 February 2026 03:05:02 +0000 (0:00:01.805) 0:00:32.139 ******* 2026-02-09 03:05:02.751370 | 
orchestrator | =============================================================================== 2026-02-09 03:05:02.751376 | orchestrator | osism.services.frr : Install frr package ------------------------------- 14.86s 2026-02-09 03:05:02.751383 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.53s 2026-02-09 03:05:02.751389 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.64s 2026-02-09 03:05:02.751395 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.86s 2026-02-09 03:05:02.751401 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.81s 2026-02-09 03:05:02.751420 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.33s 2026-02-09 03:05:02.751427 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.21s 2026-02-09 03:05:02.751433 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.17s 2026-02-09 03:05:02.751439 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.13s 2026-02-09 03:05:02.751445 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.09s 2026-02-09 03:05:02.751451 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.18s 2026-02-09 03:05:02.751457 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.18s 2026-02-09 03:05:03.302907 | orchestrator | + osism apply kubernetes 2026-02-09 03:05:05.964055 | orchestrator | 2026-02-09 03:05:05 | INFO  | Task 2291e880-9b07-475c-99f1-e2ccac895730 (kubernetes) was prepared for execution. 
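The frr role's "Set sysctl parameters" task above applies six keys. A sketch writing the same key/value pairs in sysctl.d style, to a temp file here instead of /etc/sysctl.d so no root is needed; on a real host this would be followed by `sysctl -p`:

```shell
# The six sysctl keys from the frr task above, written sysctl.d-style.
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
# on a real node: sudo sysctl -p "$conf"
wc -l < "$conf"
```

These values are what a BGP-routed host wants: forwarding on, ICMP redirects off, multipath hashing on L4 fields, and loose reverse-path filtering so asymmetric routes are not dropped.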
2026-02-09 03:05:05.964140 | orchestrator | 2026-02-09 03:05:05 | INFO  | It takes a moment until task 2291e880-9b07-475c-99f1-e2ccac895730 (kubernetes) has been started and output is visible here. 2026-02-09 03:05:31.415000 | orchestrator | 2026-02-09 03:05:31.415101 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-09 03:05:31.415120 | orchestrator | 2026-02-09 03:05:31.415128 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-09 03:05:31.415136 | orchestrator | Monday 09 February 2026 03:05:11 +0000 (0:00:00.191) 0:00:00.191 ******* 2026-02-09 03:05:31.415142 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:05:31.415150 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:05:31.415156 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:05:31.415163 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:05:31.415169 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:05:31.415176 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:05:31.415187 | orchestrator | 2026-02-09 03:05:31.415209 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-09 03:05:31.415219 | orchestrator | Monday 09 February 2026 03:05:12 +0000 (0:00:00.783) 0:00:00.974 ******* 2026-02-09 03:05:31.415239 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.415250 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.415260 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.415269 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.415279 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.415289 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:05:31.415300 | orchestrator | 2026-02-09 03:05:31.415310 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-09 03:05:31.415322 | orchestrator | Monday 09 February 2026 
03:05:13 +0000 (0:00:00.712) 0:00:01.687 ******* 2026-02-09 03:05:31.415333 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.415343 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.415354 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.415362 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.415369 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.415375 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:05:31.415382 | orchestrator | 2026-02-09 03:05:31.415388 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-09 03:05:31.415395 | orchestrator | Monday 09 February 2026 03:05:13 +0000 (0:00:00.777) 0:00:02.464 ******* 2026-02-09 03:05:31.415401 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:05:31.415408 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:05:31.415414 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:05:31.415424 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:05:31.415430 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:05:31.415436 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:05:31.415442 | orchestrator | 2026-02-09 03:05:31.415449 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-09 03:05:31.415455 | orchestrator | Monday 09 February 2026 03:05:15 +0000 (0:00:01.360) 0:00:03.825 ******* 2026-02-09 03:05:31.415461 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:05:31.415467 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:05:31.415474 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:05:31.415480 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:05:31.415486 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:05:31.415492 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:05:31.415498 | orchestrator | 2026-02-09 03:05:31.415504 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-02-09 03:05:31.415510 | orchestrator | Monday 09 February 2026 03:05:16 +0000 (0:00:01.122) 0:00:04.947 ******* 2026-02-09 03:05:31.415516 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:05:31.415541 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:05:31.415548 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:05:31.415556 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:05:31.415563 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:05:31.415571 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:05:31.415578 | orchestrator | 2026-02-09 03:05:31.415592 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-09 03:05:31.415599 | orchestrator | Monday 09 February 2026 03:05:17 +0000 (0:00:01.035) 0:00:05.982 ******* 2026-02-09 03:05:31.415607 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.415614 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.415622 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.415630 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.415637 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.415645 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:05:31.415651 | orchestrator | 2026-02-09 03:05:31.415658 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-09 03:05:31.415664 | orchestrator | Monday 09 February 2026 03:05:18 +0000 (0:00:00.855) 0:00:06.838 ******* 2026-02-09 03:05:31.415670 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.415676 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.415682 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.415688 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.415695 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.415701 | orchestrator | 
skipping: [testbed-node-2] 2026-02-09 03:05:31.415707 | orchestrator | 2026-02-09 03:05:31.415713 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-09 03:05:31.415719 | orchestrator | Monday 09 February 2026 03:05:18 +0000 (0:00:00.674) 0:00:07.512 ******* 2026-02-09 03:05:31.415725 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-09 03:05:31.415732 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-09 03:05:31.415738 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.415744 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-09 03:05:31.415755 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-09 03:05:31.415765 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.415794 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-09 03:05:31.415804 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-09 03:05:31.415813 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.415823 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-09 03:05:31.415852 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-09 03:05:31.415863 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.415873 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-09 03:05:31.415884 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-09 03:05:31.415892 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.415898 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-09 03:05:31.415905 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-09 03:05:31.415911 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:05:31.415917 | orchestrator | 2026-02-09 03:05:31.415923 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-09 03:05:31.415929 | orchestrator | Monday 09 February 2026 03:05:19 +0000 (0:00:00.722) 0:00:08.234 ******* 2026-02-09 03:05:31.415935 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.415941 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.415947 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.415960 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.415966 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.415972 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:05:31.415978 | orchestrator | 2026-02-09 03:05:31.415984 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-09 03:05:31.415992 | orchestrator | Monday 09 February 2026 03:05:21 +0000 (0:00:01.306) 0:00:09.541 ******* 2026-02-09 03:05:31.415998 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:05:31.416004 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:05:31.416010 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:05:31.416016 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:05:31.416022 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:05:31.416029 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:05:31.416035 | orchestrator | 2026-02-09 03:05:31.416041 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-09 03:05:31.416047 | orchestrator | Monday 09 February 2026 03:05:21 +0000 (0:00:00.970) 0:00:10.512 ******* 2026-02-09 03:05:31.416053 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:05:31.416059 | orchestrator | changed: 
[testbed-node-5] 2026-02-09 03:05:31.416065 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:05:31.416071 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:05:31.416077 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:05:31.416083 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:05:31.416089 | orchestrator | 2026-02-09 03:05:31.416096 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-09 03:05:31.416102 | orchestrator | Monday 09 February 2026 03:05:27 +0000 (0:00:05.421) 0:00:15.933 ******* 2026-02-09 03:05:31.416108 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.416120 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.416130 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.416137 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.416144 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.416150 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:05:31.416156 | orchestrator | 2026-02-09 03:05:31.416162 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-09 03:05:31.416168 | orchestrator | Monday 09 February 2026 03:05:28 +0000 (0:00:01.000) 0:00:16.933 ******* 2026-02-09 03:05:31.416175 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.416185 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.416195 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.416205 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.416215 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.416226 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:05:31.416237 | orchestrator | 2026-02-09 03:05:31.416247 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-09 03:05:31.416259 | orchestrator | Monday 09 
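In the k3s_download tasks above only the x64 binary is fetched; the arm64 and armhf variants are skipped. A sketch of that per-architecture selection (the suffix mapping is inferred from the task names, so treat it as an assumption):

```shell
# Map a machine architecture to the k3s binary suffix, as implied by the
# x64/arm64/armhf download tasks above (mapping assumed, not verified).
k3s_suffix() {
  case "$1" in
    x86_64)  echo "" ;;          # plain "k3s" binary
    aarch64) echo "-arm64" ;;
    armv7l)  echo "-armhf" ;;
    *)       return 1 ;;         # unsupported architecture
  esac
}
echo "binary: k3s$(k3s_suffix x86_64)"
```

On the testbed nodes `uname -m` would report x86_64, which matches the one download task that ran.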
February 2026 03:05:29 +0000 (0:00:01.404) 0:00:18.337 ******* 2026-02-09 03:05:31.416265 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.416271 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.416277 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.416283 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.416289 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.416295 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:05:31.416301 | orchestrator | 2026-02-09 03:05:31.416307 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-09 03:05:31.416314 | orchestrator | Monday 09 February 2026 03:05:30 +0000 (0:00:00.720) 0:00:19.058 ******* 2026-02-09 03:05:31.416320 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-09 03:05:31.416333 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-09 03:05:31.416347 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:05:31.416362 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-09 03:05:31.416379 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-09 03:05:31.416388 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:05:31.416397 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-09 03:05:31.416407 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-09 03:05:31.416416 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:05:31.416425 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-09 03:05:31.416434 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-09 03:05:31.416442 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:05:31.416451 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-09 03:05:31.416459 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-09 
03:05:31.416467 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:05:31.416476 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-09 03:05:31.416484 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-09 03:05:31.416494 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:05:31.416504 | orchestrator | 2026-02-09 03:05:31.416513 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-09 03:05:31.416533 | orchestrator | Monday 09 February 2026 03:05:31 +0000 (0:00:00.841) 0:00:19.899 ******* 2026-02-09 03:06:46.630310 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:06:46.630388 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:06:46.630396 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:06:46.630402 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:06:46.630408 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:06:46.630413 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:06:46.630419 | orchestrator | 2026-02-09 03:06:46.630425 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-09 03:06:46.630432 | orchestrator | Monday 09 February 2026 03:05:31 +0000 (0:00:00.601) 0:00:20.500 ******* 2026-02-09 03:06:46.630438 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:06:46.630443 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:06:46.630449 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:06:46.630454 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:06:46.630459 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:06:46.630464 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:06:46.630470 | orchestrator | 2026-02-09 03:06:46.630475 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-09 03:06:46.630480 | orchestrator | 2026-02-09 
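The k3s_custom_registries tasks above would render /etc/rancher/k3s/registries.yaml when mirrors are configured; here they are skipped. A minimal sketch of that file's standard k3s shape, written to a temp path, with a purely illustrative mirror endpoint:

```shell
# Sketch of a k3s registries.yaml (standard k3s mirror format; the
# endpoint URL is a made-up example). Written to a temp file instead of
# /etc/rancher/k3s/registries.yaml so the sketch needs no root.
reg=$(mktemp)
cat > "$reg" <<'EOF'
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
EOF
grep -c endpoint "$reg"
```

With such a file in place, containerd on each node pulls docker.io images through the listed mirror first, which is how a testbed's local pull-through registry would be wired in.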
03:06:46.630486 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-09 03:06:46.630492 | orchestrator | Monday 09 February 2026 03:05:33 +0000 (0:00:01.268) 0:00:21.769 ******* 2026-02-09 03:06:46.630497 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:06:46.630504 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:06:46.630509 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:06:46.630514 | orchestrator | 2026-02-09 03:06:46.630520 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-09 03:06:46.630525 | orchestrator | Monday 09 February 2026 03:05:34 +0000 (0:00:01.720) 0:00:23.489 ******* 2026-02-09 03:06:46.630531 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:06:46.630536 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:06:46.630541 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:06:46.630546 | orchestrator | 2026-02-09 03:06:46.630552 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-09 03:06:46.630557 | orchestrator | Monday 09 February 2026 03:05:37 +0000 (0:00:02.540) 0:00:26.030 ******* 2026-02-09 03:06:46.630562 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:06:46.630568 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:06:46.630573 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:06:46.630579 | orchestrator | 2026-02-09 03:06:46.630584 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-09 03:06:46.630589 | orchestrator | Monday 09 February 2026 03:05:38 +0000 (0:00:00.969) 0:00:27.000 ******* 2026-02-09 03:06:46.630611 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:06:46.630616 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:06:46.630622 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:06:46.630627 | orchestrator | 2026-02-09 03:06:46.630632 | orchestrator | TASK [k3s_server : Deploy K3s 
http_proxy conf] ********************************* 2026-02-09 03:06:46.630638 | orchestrator | Monday 09 February 2026 03:05:39 +0000 (0:00:00.905) 0:00:27.906 ******* 2026-02-09 03:06:46.630643 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:06:46.630648 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:06:46.630653 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:06:46.630659 | orchestrator | 2026-02-09 03:06:46.630664 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-09 03:06:46.630681 | orchestrator | Monday 09 February 2026 03:05:39 +0000 (0:00:00.307) 0:00:28.213 ******* 2026-02-09 03:06:46.630686 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:06:46.630692 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:06:46.630697 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:06:46.630702 | orchestrator | 2026-02-09 03:06:46.630707 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-09 03:06:46.630712 | orchestrator | Monday 09 February 2026 03:05:40 +0000 (0:00:00.903) 0:00:29.116 ******* 2026-02-09 03:06:46.630717 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:06:46.630723 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:06:46.630728 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:06:46.630733 | orchestrator | 2026-02-09 03:06:46.630738 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-09 03:06:46.630743 | orchestrator | Monday 09 February 2026 03:05:41 +0000 (0:00:01.288) 0:00:30.405 ******* 2026-02-09 03:06:46.630749 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:06:46.630754 | orchestrator | 2026-02-09 03:06:46.630759 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-02-09 03:06:46.630764 
| orchestrator | Monday 09 February 2026 03:05:42 +0000 (0:00:00.517) 0:00:30.923 ******* 2026-02-09 03:06:46.630770 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:06:46.630775 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:06:46.630780 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:06:46.630785 | orchestrator | 2026-02-09 03:06:46.630790 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-09 03:06:46.630795 | orchestrator | Monday 09 February 2026 03:05:43 +0000 (0:00:01.458) 0:00:32.382 ******* 2026-02-09 03:06:46.630801 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:06:46.630806 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:06:46.630811 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:06:46.630859 | orchestrator | 2026-02-09 03:06:46.630864 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-09 03:06:46.630869 | orchestrator | Monday 09 February 2026 03:05:44 +0000 (0:00:00.543) 0:00:32.925 ******* 2026-02-09 03:06:46.630874 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:06:46.630879 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:06:46.630884 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:06:46.630888 | orchestrator | 2026-02-09 03:06:46.630893 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-09 03:06:46.630898 | orchestrator | Monday 09 February 2026 03:05:45 +0000 (0:00:00.802) 0:00:33.728 ******* 2026-02-09 03:06:46.630903 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:06:46.630908 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:06:46.630913 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:06:46.630918 | orchestrator | 2026-02-09 03:06:46.630922 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-09 03:06:46.630939 | orchestrator | Monday 
09 February 2026 03:05:46 +0000 (0:00:01.185) 0:00:34.914 ******* 2026-02-09 03:06:46.630945 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:06:46.630957 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:06:46.630961 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:06:46.630966 | orchestrator | 2026-02-09 03:06:46.630971 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-09 03:06:46.630976 | orchestrator | Monday 09 February 2026 03:05:46 +0000 (0:00:00.569) 0:00:35.483 ******* 2026-02-09 03:06:46.630981 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:06:46.630986 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:06:46.630990 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:06:46.630995 | orchestrator | 2026-02-09 03:06:46.631000 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-09 03:06:46.631005 | orchestrator | Monday 09 February 2026 03:05:47 +0000 (0:00:00.309) 0:00:35.792 ******* 2026-02-09 03:06:46.631010 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:06:46.631015 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:06:46.631019 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:06:46.631024 | orchestrator | 2026-02-09 03:06:46.631033 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-09 03:06:46.631038 | orchestrator | Monday 09 February 2026 03:05:48 +0000 (0:00:01.196) 0:00:36.989 ******* 2026-02-09 03:06:46.631043 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:06:46.631056 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:06:46.631061 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:06:46.631066 | orchestrator | 2026-02-09 03:06:46.631070 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-09 03:06:46.631075 | orchestrator | Monday 09 February 2026 
03:05:51 +0000 (0:00:02.663) 0:00:39.653 ******* 2026-02-09 03:06:46.631080 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:06:46.631085 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:06:46.631096 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:06:46.631106 | orchestrator | 2026-02-09 03:06:46.631111 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-09 03:06:46.631116 | orchestrator | Monday 09 February 2026 03:05:51 +0000 (0:00:00.339) 0:00:39.992 ******* 2026-02-09 03:06:46.631121 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-09 03:06:46.631128 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-09 03:06:46.631133 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-09 03:06:46.631138 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-09 03:06:46.631143 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-09 03:06:46.631147 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-09 03:06:46.631152 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-09 03:06:46.631157 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-09 03:06:46.631162 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-09 03:06:46.631167 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-09 03:06:46.631172 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-09 03:06:46.631181 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-09 03:06:46.631186 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-09 03:06:46.631190 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-09 03:06:46.631195 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2026-02-09 03:06:46.631200 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:06:46.631205 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:06:46.631210 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:06:46.631215 | orchestrator |
2026-02-09 03:06:46.631223 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-09 03:06:46.631228 | orchestrator | Monday 09 February 2026 03:06:45 +0000 (0:00:53.862) 0:01:33.855 *******
2026-02-09 03:06:46.631233 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:06:46.631237 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:06:46.631242 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:06:46.631247 | orchestrator |
2026-02-09 03:06:46.631252 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-09 03:06:46.631257 | orchestrator | Monday 09 February 2026 03:06:45 +0000 (0:00:00.328) 0:01:34.184 *******
2026-02-09 03:06:46.631265 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:07:27.437625 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:07:27.437719 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:07:27.437731 | orchestrator |
2026-02-09 03:07:27.437741 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-09 03:07:27.437751 | orchestrator | Monday 09 February 2026 03:06:46 +0000 (0:00:00.953) 0:01:35.137 *******
2026-02-09 03:07:27.437759 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:07:27.437767 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:07:27.437775 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:07:27.437783 | orchestrator |
2026-02-09 03:07:27.437791 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-09 03:07:27.437799 | orchestrator | Monday 09 February 2026 03:06:47 +0000 (0:00:01.129) 0:01:36.266 *******
2026-02-09 03:07:27.437806 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:07:27.437814 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:07:27.437822 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:07:27.437830 | orchestrator |
2026-02-09 03:07:27.437880 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-09 03:07:27.437896 | orchestrator | Monday 09 February 2026 03:07:13 +0000 (0:00:25.319) 0:02:01.586 *******
2026-02-09 03:07:27.437910 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:07:27.437923 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:07:27.437936 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:07:27.437944 | orchestrator |
2026-02-09 03:07:27.437951 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-09 03:07:27.437958 | orchestrator | Monday 09 February 2026 03:07:13 +0000 (0:00:00.644) 0:02:02.231 *******
2026-02-09 03:07:27.437966 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:07:27.437974 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:07:27.437981 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:07:27.437988 | orchestrator |
2026-02-09 03:07:27.437995 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-09 03:07:27.438003 | orchestrator | Monday 09 February 2026 03:07:14 +0000 (0:00:00.632) 0:02:02.863 *******
2026-02-09 03:07:27.438010 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:07:27.438091 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:07:27.438100 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:07:27.438107 | orchestrator |
2026-02-09 03:07:27.438114 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-09 03:07:27.438162 | orchestrator | Monday 09 February 2026 03:07:14 +0000 (0:00:00.617) 0:02:03.481 *******
2026-02-09 03:07:27.438173 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:07:27.438181 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:07:27.438190 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:07:27.438199 | orchestrator |
2026-02-09 03:07:27.438208 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-09 03:07:27.438216 | orchestrator | Monday 09 February 2026 03:07:15 +0000 (0:00:00.785) 0:02:04.266 *******
2026-02-09 03:07:27.438225 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:07:27.438234 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:07:27.438243 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:07:27.438251 | orchestrator |
2026-02-09 03:07:27.438260 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-09 03:07:27.438269 | orchestrator | Monday 09 February 2026 03:07:16 +0000 (0:00:00.314) 0:02:04.581 *******
2026-02-09 03:07:27.438276 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:07:27.438284 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:07:27.438291 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:07:27.438298 | orchestrator |
2026-02-09 03:07:27.438305 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-09 03:07:27.438312 | orchestrator | Monday 09 February 2026 03:07:16 +0000 (0:00:00.615) 0:02:05.196 *******
2026-02-09 03:07:27.438319 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:07:27.438326 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:07:27.438334 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:07:27.438341 | orchestrator |
2026-02-09 03:07:27.438348 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-09 03:07:27.438356 | orchestrator | Monday 09 February 2026 03:07:17 +0000 (0:00:00.643) 0:02:05.840 *******
2026-02-09 03:07:27.438363 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:07:27.438370 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:07:27.438377 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:07:27.438384 | orchestrator |
2026-02-09 03:07:27.438392 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-09 03:07:27.438399 | orchestrator | Monday 09 February 2026 03:07:18 +0000 (0:00:01.094) 0:02:06.935 *******
2026-02-09 03:07:27.438408 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:07:27.438416 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:07:27.438423 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:07:27.438430 | orchestrator |
2026-02-09 03:07:27.438438 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-09 03:07:27.438445 | orchestrator | Monday 09 February 2026 03:07:19 +0000 (0:00:00.848) 0:02:07.783 *******
2026-02-09 03:07:27.438453 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:07:27.438460 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:07:27.438467 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:07:27.438474 | orchestrator |
2026-02-09 03:07:27.438481 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-09 03:07:27.438489 | orchestrator | Monday 09 February 2026 03:07:19 +0000 (0:00:00.327) 0:02:08.110 *******
2026-02-09 03:07:27.438496 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:07:27.438503 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:07:27.438510 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:07:27.438517 | orchestrator |
2026-02-09 03:07:27.438524 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-09 03:07:27.438531 | orchestrator | Monday 09 February 2026 03:07:19 +0000 (0:00:00.294) 0:02:08.405 *******
2026-02-09 03:07:27.438539 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:07:27.438546 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:07:27.438553 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:07:27.438560 | orchestrator |
2026-02-09 03:07:27.438567 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-09 03:07:27.438574 | orchestrator | Monday 09 February 2026 03:07:20 +0000 (0:00:00.590) 0:02:08.995 *******
2026-02-09 03:07:27.438588 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:07:27.438595 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:07:27.438618 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:07:27.438626 | orchestrator |
2026-02-09 03:07:27.438634 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-09 03:07:27.438643 | orchestrator | Monday 09 February 2026 03:07:21 +0000 (0:00:00.862) 0:02:09.858 *******
2026-02-09 03:07:27.438651 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-09 03:07:27.438659 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-09 03:07:27.438666 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-09 03:07:27.438673 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-09 03:07:27.438680 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-09 03:07:27.438687 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-09 03:07:27.438695 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-09 03:07:27.438702 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-09 03:07:27.438710 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-09 03:07:27.438717 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-09 03:07:27.438724 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-09 03:07:27.438731 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-09 03:07:27.438738 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-09 03:07:27.438746 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-09 03:07:27.438753 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-09 03:07:27.438760 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-09 03:07:27.438767 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-09 03:07:27.438774 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-09 03:07:27.438781 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-09 03:07:27.438788 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-09 03:07:27.438796 | orchestrator |
2026-02-09 03:07:27.438803 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-09 03:07:27.438810 | orchestrator |
2026-02-09 03:07:27.438817 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-09 03:07:27.438825 | orchestrator | Monday 09 February 2026 03:07:24 +0000 (0:00:02.887) 0:02:12.745 *******
2026-02-09 03:07:27.438832 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:07:27.438857 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:07:27.438865 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:07:27.438873 | orchestrator |
2026-02-09 03:07:27.438893 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-09 03:07:27.438901 | orchestrator | Monday 09 February 2026 03:07:24 +0000 (0:00:00.393) 0:02:13.139 *******
2026-02-09 03:07:27.438908 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:07:27.438915 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:07:27.438922 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:07:27.438934 | orchestrator |
2026-02-09 03:07:27.438942 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-09 03:07:27.438949 | orchestrator | Monday 09 February 2026 03:07:25 +0000 (0:00:00.918) 0:02:14.058 *******
2026-02-09 03:07:27.438956 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:07:27.438963 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:07:27.438970 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:07:27.438977 | orchestrator |
2026-02-09 03:07:27.438984 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-09 03:07:27.438991 | orchestrator | Monday 09 February 2026 03:07:25 +0000 (0:00:00.325) 0:02:14.384 *******
2026-02-09 03:07:27.438998 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:07:27.439005 | orchestrator |
2026-02-09 03:07:27.439012 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-09 03:07:27.439020 | orchestrator | Monday 09 February 2026 03:07:26 +0000 (0:00:00.529) 0:02:14.913 *******
2026-02-09 03:07:27.439027 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:07:27.439034 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:07:27.439041 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:07:27.439048 | orchestrator |
2026-02-09 03:07:27.439055 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-09 03:07:27.439062 | orchestrator | Monday 09 February 2026 03:07:26 +0000 (0:00:00.529) 0:02:15.443 *******
2026-02-09 03:07:27.439069 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:07:27.439076 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:07:27.439084 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:07:27.439091 | orchestrator |
2026-02-09 03:07:27.439098 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-09 03:07:27.439105 | orchestrator | Monday 09 February 2026 03:07:27 +0000 (0:00:00.317) 0:02:15.761 *******
2026-02-09 03:07:27.439117 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:09:05.323167 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:09:05.323263 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:09:05.323272 | orchestrator |
2026-02-09 03:09:05.323280 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-09 03:09:05.323289 | orchestrator | Monday 09 February 2026 03:07:27 +0000 (0:00:00.306) 0:02:16.067 *******
2026-02-09 03:09:05.323296 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:09:05.323303 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:09:05.323309 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:09:05.323316 | orchestrator |
2026-02-09 03:09:05.323323 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-09 03:09:05.323329 | orchestrator | Monday 09 February 2026 03:07:28 +0000 (0:00:00.603) 0:02:16.671 *******
2026-02-09 03:09:05.323336 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:09:05.323343 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:09:05.323350 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:09:05.323357 | orchestrator |
2026-02-09 03:09:05.323363 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-09 03:09:05.323370 | orchestrator | Monday 09 February 2026 03:07:29 +0000 (0:00:01.390) 0:02:18.061 *******
2026-02-09 03:09:05.323377 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:09:05.323384 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:09:05.323391 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:09:05.323396 | orchestrator |
2026-02-09 03:09:05.323403 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-09 03:09:05.323410 | orchestrator | Monday 09 February 2026 03:07:30 +0000 (0:00:01.212) 0:02:19.274 *******
2026-02-09 03:09:05.323416 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:09:05.323423 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:09:05.323429 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:09:05.323437 | orchestrator |
2026-02-09 03:09:05.323443 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-09 03:09:05.323471 | orchestrator |
2026-02-09 03:09:05.323477 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-09 03:09:05.323484 | orchestrator | Monday 09 February 2026 03:07:40 +0000 (0:00:09.553) 0:02:28.827 *******
2026-02-09 03:09:05.323491 | orchestrator | ok: [testbed-manager]
2026-02-09 03:09:05.323498 | orchestrator |
2026-02-09 03:09:05.323505 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-09 03:09:05.323511 | orchestrator | Monday 09 February 2026 03:07:41 +0000 (0:00:01.002) 0:02:29.830 *******
2026-02-09 03:09:05.323518 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:05.323525 | orchestrator |
2026-02-09 03:09:05.323531 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-09 03:09:05.323537 | orchestrator | Monday 09 February 2026 03:07:41 +0000 (0:00:00.442) 0:02:30.272 *******
2026-02-09 03:09:05.323544 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-09 03:09:05.323551 | orchestrator |
2026-02-09 03:09:05.323557 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-09 03:09:05.323564 | orchestrator | Monday 09 February 2026 03:07:42 +0000 (0:00:00.586) 0:02:30.859 *******
2026-02-09 03:09:05.323570 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:05.323576 | orchestrator |
2026-02-09 03:09:05.323582 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-09 03:09:05.323589 | orchestrator | Monday 09 February 2026 03:07:43 +0000 (0:00:00.894) 0:02:31.753 *******
2026-02-09 03:09:05.323596 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:05.323602 | orchestrator |
2026-02-09 03:09:05.323609 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-09 03:09:05.323616 | orchestrator | Monday 09 February 2026 03:07:43 +0000 (0:00:00.560) 0:02:32.313 *******
2026-02-09 03:09:05.323622 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-09 03:09:05.323629 | orchestrator |
2026-02-09 03:09:05.323635 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-09 03:09:05.323641 | orchestrator | Monday 09 February 2026 03:07:45 +0000 (0:00:01.585) 0:02:33.899 *******
2026-02-09 03:09:05.323648 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-09 03:09:05.323654 | orchestrator |
2026-02-09 03:09:05.323674 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-09 03:09:05.323686 | orchestrator | Monday 09 February 2026 03:07:46 +0000 (0:00:00.819) 0:02:34.719 *******
2026-02-09 03:09:05.323692 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:05.323698 | orchestrator |
2026-02-09 03:09:05.323705 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-09 03:09:05.323714 | orchestrator | Monday 09 February 2026 03:07:46 +0000 (0:00:00.476) 0:02:35.195 *******
2026-02-09 03:09:05.323724 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:05.323734 | orchestrator |
2026-02-09 03:09:05.323744 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-09 03:09:05.323755 | orchestrator |
2026-02-09 03:09:05.323765 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-09 03:09:05.323775 | orchestrator | Monday 09 February 2026 03:07:47 +0000 (0:00:00.453) 0:02:35.649 *******
2026-02-09 03:09:05.323785 | orchestrator | ok: [testbed-manager]
2026-02-09 03:09:05.323795 | orchestrator |
2026-02-09 03:09:05.323804 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-09 03:09:05.323814 | orchestrator | Monday 09 February 2026 03:07:47 +0000 (0:00:00.375) 0:02:36.025 *******
2026-02-09 03:09:05.323824 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-09 03:09:05.323834 | orchestrator |
2026-02-09 03:09:05.323844 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-09 03:09:05.323854 | orchestrator | Monday 09 February 2026 03:07:47 +0000 (0:00:00.271) 0:02:36.296 *******
2026-02-09 03:09:05.323864 | orchestrator | ok: [testbed-manager]
2026-02-09 03:09:05.323873 | orchestrator |
2026-02-09 03:09:05.323889 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-09 03:09:05.323898 | orchestrator | Monday 09 February 2026 03:07:48 +0000 (0:00:00.845) 0:02:37.142 *******
2026-02-09 03:09:05.323928 | orchestrator | ok: [testbed-manager]
2026-02-09 03:09:05.323936 | orchestrator |
2026-02-09 03:09:05.323962 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-09 03:09:05.323972 | orchestrator | Monday 09 February 2026 03:07:50 +0000 (0:00:01.634) 0:02:38.776 *******
2026-02-09 03:09:05.323981 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:05.323991 | orchestrator |
2026-02-09 03:09:05.323999 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-09 03:09:05.324005 | orchestrator | Monday 09 February 2026 03:07:51 +0000 (0:00:00.802) 0:02:39.579 *******
2026-02-09 03:09:05.324012 | orchestrator | ok: [testbed-manager]
2026-02-09 03:09:05.324018 | orchestrator |
2026-02-09 03:09:05.324025 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-09 03:09:05.324031 | orchestrator | Monday 09 February 2026 03:07:51 +0000 (0:00:00.477) 0:02:40.056 *******
2026-02-09 03:09:05.324037 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:05.324043 | orchestrator |
2026-02-09 03:09:05.324050 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-09 03:09:05.324056 | orchestrator | Monday 09 February 2026 03:07:58 +0000 (0:00:07.291) 0:02:47.348 *******
2026-02-09 03:09:05.324063 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:05.324069 | orchestrator |
2026-02-09 03:09:05.324075 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-09 03:09:05.324081 | orchestrator | Monday 09 February 2026 03:08:11 +0000 (0:00:12.797) 0:03:00.146 *******
2026-02-09 03:09:05.324087 | orchestrator | ok: [testbed-manager]
2026-02-09 03:09:05.324093 | orchestrator |
2026-02-09 03:09:05.324099 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-09 03:09:05.324106 | orchestrator |
2026-02-09 03:09:05.324112 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-09 03:09:05.324119 | orchestrator | Monday 09 February 2026 03:08:12 +0000 (0:00:00.746) 0:03:00.892 *******
2026-02-09 03:09:05.324125 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:09:05.324131 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:09:05.324137 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:09:05.324144 | orchestrator |
2026-02-09 03:09:05.324150 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-09 03:09:05.324156 | orchestrator | Monday 09 February 2026 03:08:12 +0000 (0:00:00.338) 0:03:01.231 *******
2026-02-09 03:09:05.324162 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:05.324168 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:09:05.324175 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:09:05.324181 | orchestrator |
2026-02-09 03:09:05.324187 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-09 03:09:05.324193 | orchestrator | Monday 09 February 2026 03:08:13 +0000 (0:00:00.312) 0:03:01.544 *******
2026-02-09 03:09:05.324200 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:09:05.324207 | orchestrator |
2026-02-09 03:09:05.324213 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-09 03:09:05.324219 | orchestrator | Monday 09 February 2026 03:08:13 +0000 (0:00:00.761) 0:03:02.306 *******
2026-02-09 03:09:05.324225 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-09 03:09:05.324232 | orchestrator |
2026-02-09 03:09:05.324238 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-09 03:09:05.324244 | orchestrator | Monday 09 February 2026 03:08:14 +0000 (0:00:00.810) 0:03:03.116 *******
2026-02-09 03:09:05.324250 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 03:09:05.324257 | orchestrator |
2026-02-09 03:09:05.324264 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-09 03:09:05.324276 | orchestrator | Monday 09 February 2026 03:08:15 +0000 (0:00:00.889) 0:03:04.005 *******
2026-02-09 03:09:05.324282 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:05.324288 | orchestrator |
2026-02-09 03:09:05.324294 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-09 03:09:05.324300 | orchestrator | Monday 09 February 2026 03:08:15 +0000 (0:00:00.127) 0:03:04.133 *******
2026-02-09 03:09:05.324306 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 03:09:05.324313 | orchestrator |
2026-02-09 03:09:05.324319 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-09 03:09:05.324326 | orchestrator | Monday 09 February 2026 03:08:16 +0000 (0:00:01.017) 0:03:05.150 *******
2026-02-09 03:09:05.324332 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:05.324339 | orchestrator |
2026-02-09 03:09:05.324344 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-09 03:09:05.324351 | orchestrator | Monday 09 February 2026 03:08:16 +0000 (0:00:00.125) 0:03:05.275 *******
2026-02-09 03:09:05.324357 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:05.324363 | orchestrator |
2026-02-09 03:09:05.324369 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-09 03:09:05.324375 | orchestrator | Monday 09 February 2026 03:08:16 +0000 (0:00:00.123) 0:03:05.399 *******
2026-02-09 03:09:05.324382 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:05.324388 | orchestrator |
2026-02-09 03:09:05.324394 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-09 03:09:05.324407 | orchestrator | Monday 09 February 2026 03:08:17 +0000 (0:00:00.127) 0:03:05.527 *******
2026-02-09 03:09:05.324413 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:05.324419 | orchestrator |
2026-02-09 03:09:05.324426 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-09 03:09:05.324432 | orchestrator | Monday 09 February 2026 03:08:17 +0000 (0:00:00.124) 0:03:05.651 *******
2026-02-09 03:09:05.324438 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-09 03:09:05.324444 | orchestrator |
2026-02-09 03:09:05.324450 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-09 03:09:05.324457 | orchestrator | Monday 09 February 2026 03:08:22 +0000 (0:00:05.792) 0:03:11.444 *******
2026-02-09 03:09:05.324463 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-09 03:09:05.324469 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-02-09 03:09:05.324483 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-09 03:09:28.954239 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-09 03:09:28.954344 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-09 03:09:28.954360 | orchestrator |
2026-02-09 03:09:28.954372 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-09 03:09:28.954384 | orchestrator | Monday 09 February 2026 03:09:05 +0000 (0:00:42.379) 0:03:53.824 *******
2026-02-09 03:09:28.954396 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 03:09:28.954407 | orchestrator |
2026-02-09 03:09:28.954418 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-09 03:09:28.954430 | orchestrator | Monday 09 February 2026 03:09:06 +0000 (0:00:01.281) 0:03:55.105 *******
2026-02-09 03:09:28.954441 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-09 03:09:28.954452 | orchestrator |
2026-02-09 03:09:28.954463 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-09 03:09:28.954474 | orchestrator | Monday 09 February 2026 03:09:08 +0000 (0:00:01.798) 0:03:56.904 *******
2026-02-09 03:09:28.954485 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-09 03:09:28.954495 | orchestrator |
2026-02-09 03:09:28.954506 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-09 03:09:28.954518 | orchestrator | Monday 09 February 2026 03:09:09 +0000 (0:00:01.085) 0:03:57.989 *******
2026-02-09 03:09:28.954556 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:28.954568 | orchestrator |
2026-02-09 03:09:28.954579 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-09 03:09:28.954590 | orchestrator | Monday 09 February 2026 03:09:09 +0000 (0:00:00.132) 0:03:58.122 *******
2026-02-09 03:09:28.954601 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-09 03:09:28.954613 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-09 03:09:28.954624 | orchestrator |
2026-02-09 03:09:28.954635 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-09 03:09:28.954646 | orchestrator | Monday 09 February 2026 03:09:11 +0000 (0:00:01.864) 0:03:59.986 *******
2026-02-09 03:09:28.954656 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:28.954667 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:09:28.954678 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:09:28.954689 | orchestrator |
2026-02-09 03:09:28.954700 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-09 03:09:28.954710 | orchestrator | Monday 09 February 2026 03:09:11 +0000 (0:00:00.306) 0:04:00.292 *******
2026-02-09 03:09:28.954721 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:09:28.954732 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:09:28.954743 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:09:28.954754 | orchestrator |
2026-02-09 03:09:28.954767 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-09 03:09:28.954781 | orchestrator |
2026-02-09 03:09:28.954794 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-09 03:09:28.954806 | orchestrator | Monday 09 February 2026 03:09:12 +0000 (0:00:00.843) 0:04:01.136 *******
2026-02-09 03:09:28.954819 | orchestrator | ok: [testbed-manager]
2026-02-09 03:09:28.954832 | orchestrator |
2026-02-09 03:09:28.954846 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-09 03:09:28.954860 | orchestrator | Monday 09 February 2026 03:09:12 +0000 (0:00:00.369) 0:04:01.505 *******
2026-02-09 03:09:28.954872 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-09 03:09:28.954885 | orchestrator |
2026-02-09 03:09:28.954899 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-09 03:09:28.954913 | orchestrator | Monday 09 February 2026 03:09:13 +0000 (0:00:00.262) 0:04:01.768 *******
2026-02-09 03:09:28.954958 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:28.954971 | orchestrator |
2026-02-09 03:09:28.954984 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-09 03:09:28.954997 | orchestrator |
2026-02-09 03:09:28.955010 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-09 03:09:28.955023 | orchestrator | Monday 09 February 2026 03:09:18 +0000 (0:00:05.355) 0:04:07.123 *******
2026-02-09 03:09:28.955036 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:09:28.955049 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:09:28.955062 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:09:28.955076 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:09:28.955089 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:09:28.955102 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:09:28.955115 | orchestrator |
2026-02-09 03:09:28.955129 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-09 03:09:28.955143 | orchestrator | Monday 09 February 2026 03:09:19 +0000 (0:00:00.744) 0:04:07.867 *******
2026-02-09 03:09:28.955157 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-09 03:09:28.955168 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-09 03:09:28.955179 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-09 03:09:28.955189 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-09 03:09:28.955210 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-09 03:09:28.955221 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-09 03:09:28.955232 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-09 03:09:28.955242 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-09 03:09:28.955253 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-09 03:09:28.955282 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-09 03:09:28.955294 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-09 03:09:28.955305 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-09 03:09:28.955316 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-09 03:09:28.955327 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-09 03:09:28.955338 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-09 03:09:28.955367 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-09 03:09:28.955379 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-09 03:09:28.955390 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-09 03:09:28.955401 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-09 03:09:28.955412 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-09 03:09:28.955422 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-09 03:09:28.955433 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-09 03:09:28.955444 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-09 03:09:28.955455 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-09 03:09:28.955465 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-09 03:09:28.955476 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-09 03:09:28.955487 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-09 03:09:28.955497 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-09 03:09:28.955508 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-09 03:09:28.955519 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-09 03:09:28.955529 | orchestrator |
2026-02-09 03:09:28.955540 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-09 03:09:28.955551 | orchestrator | Monday 09 February 2026 03:09:27 +0000 (0:00:08.330) 0:04:16.198 *******
2026-02-09 03:09:28.955561 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:09:28.955572 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:09:28.955583 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:09:28.955594 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:28.955604 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:09:28.955615 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:09:28.955626 | orchestrator |
2026-02-09 03:09:28.955636 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-09 03:09:28.955647 | orchestrator | Monday 09 February 2026 03:09:28 +0000 (0:00:00.558) 0:04:16.756 *******
2026-02-09 03:09:28.955658 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:09:28.955679 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:09:28.955690 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:09:28.955701 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:09:28.955711 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:09:28.955722 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:09:28.955732 | orchestrator |
2026-02-09 03:09:28.955743 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:09:28.955754 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 03:09:28.955767 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-09 03:09:28.955778 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-09 03:09:28.955789 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-09 03:09:28.955800 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-09 03:09:28.955811 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-09 03:09:28.955821 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-09 03:09:28.955832 | orchestrator |
2026-02-09 03:09:28.955843 | orchestrator |
2026-02-09 03:09:28.955854 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:09:28.955865 | orchestrator | Monday 09 February 2026 03:09:28 +0000 (0:00:00.689) 0:04:17.446 *******
2026-02-09 03:09:28.955883 | orchestrator | ===============================================================================
2026-02-09 03:09:29.381145 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.86s
2026-02-09 03:09:29.381223 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.38s
2026-02-09 03:09:29.381232 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.32s
2026-02-09 03:09:29.381238 | orchestrator | kubectl : Install required packages ------------------------------------ 12.80s
2026-02-09 03:09:29.381244 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.55s
2026-02-09 03:09:29.381250 | orchestrator | Manage labels ----------------------------------------------------------- 8.33s
2026-02-09 03:09:29.381256 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.29s
2026-02-09 03:09:29.381261 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.79s
2026-02-09 03:09:29.381267 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.42s
2026-02-09 03:09:29.381272 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.36s
2026-02-09 03:09:29.381278 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.89s
2026-02-09 03:09:29.381285 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.66s
2026-02-09 03:09:29.381290 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 2.54s
2026-02-09 03:09:29.381296 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.86s
2026-02-09 03:09:29.381301 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.80s
2026-02-09 03:09:29.381307 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.72s
2026-02-09 03:09:29.381313 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.63s
2026-02-09 03:09:29.381337 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.59s
2026-02-09 03:09:29.381343 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.46s
2026-02-09 03:09:29.381348 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.40s
2026-02-09 03:09:29.723140 | orchestrator | + osism apply copy-kubeconfig
2026-02-09 03:09:41.795578 | orchestrator | 2026-02-09 03:09:41 | INFO  | Task 2de98ae6-85bc-4cd2-8970-566956aa46d0 (copy-kubeconfig) was prepared for execution.
2026-02-09 03:09:41.795723 | orchestrator | 2026-02-09 03:09:41 | INFO  | It takes a moment until task 2de98ae6-85bc-4cd2-8970-566956aa46d0 (copy-kubeconfig) has been started and output is visible here.
2026-02-09 03:09:48.966430 | orchestrator |
2026-02-09 03:09:48.966536 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-09 03:09:48.966555 | orchestrator |
2026-02-09 03:09:48.966568 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-09 03:09:48.966577 | orchestrator | Monday 09 February 2026 03:09:46 +0000 (0:00:00.188) 0:00:00.188 *******
2026-02-09 03:09:48.966585 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-09 03:09:48.966592 | orchestrator |
2026-02-09 03:09:48.966599 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-09 03:09:48.966607 | orchestrator | Monday 09 February 2026 03:09:46 +0000 (0:00:00.773) 0:00:00.961 *******
2026-02-09 03:09:48.966633 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:48.966641 | orchestrator |
2026-02-09 03:09:48.966649 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-09 03:09:48.966656 | orchestrator | Monday 09 February 2026 03:09:48 +0000 (0:00:01.296) 0:00:02.258 *******
2026-02-09 03:09:48.966667 | orchestrator | changed: [testbed-manager]
2026-02-09 03:09:48.966674 | orchestrator |
2026-02-09 03:09:48.966684 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:09:48.966692 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 03:09:48.966700 | orchestrator |
2026-02-09 03:09:48.966706 | orchestrator |
2026-02-09 03:09:48.966713 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:09:48.966720 | orchestrator | Monday 09 February 2026 03:09:48 +0000 (0:00:00.478) 0:00:02.736 *******
2026-02-09 03:09:48.966727 | orchestrator | ===============================================================================
2026-02-09 03:09:48.966734 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.30s
2026-02-09 03:09:48.966740 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s
2026-02-09 03:09:48.966747 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s
2026-02-09 03:09:49.307364 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-02-09 03:10:01.586455 | orchestrator | 2026-02-09 03:10:01 | INFO  | Task 9a9c0029-e7b0-40c7-9f76-b969b5a6900d (openstackclient) was prepared for execution.
2026-02-09 03:10:01.586548 | orchestrator | 2026-02-09 03:10:01 | INFO  | It takes a moment until task 9a9c0029-e7b0-40c7-9f76-b969b5a6900d (openstackclient) has been started and output is visible here.
2026-02-09 03:10:50.115680 | orchestrator |
2026-02-09 03:10:50.115816 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-09 03:10:50.115841 | orchestrator |
2026-02-09 03:10:50.115859 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-09 03:10:50.115876 | orchestrator | Monday 09 February 2026 03:10:06 +0000 (0:00:00.241) 0:00:00.241 *******
2026-02-09 03:10:50.115895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-09 03:10:50.115914 | orchestrator |
2026-02-09 03:10:50.115965 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-09 03:10:50.116067 | orchestrator | Monday 09 February 2026 03:10:06 +0000 (0:00:00.231) 0:00:00.472 *******
2026-02-09 03:10:50.116087 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-09 03:10:50.116105 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-09 03:10:50.116123 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-09 03:10:50.116138 | orchestrator |
2026-02-09 03:10:50.116155 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-09 03:10:50.116171 | orchestrator | Monday 09 February 2026 03:10:07 +0000 (0:00:01.279) 0:00:01.752 *******
2026-02-09 03:10:50.116189 | orchestrator | changed: [testbed-manager]
2026-02-09 03:10:50.116208 | orchestrator |
2026-02-09 03:10:50.116227 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-09 03:10:50.116246 | orchestrator | Monday 09 February 2026 03:10:09 +0000 (0:00:01.483) 0:00:03.236 *******
2026-02-09 03:10:50.116264 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-09 03:10:50.116284 | orchestrator | ok: [testbed-manager]
2026-02-09 03:10:50.116301 | orchestrator |
2026-02-09 03:10:50.116318 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-09 03:10:50.116336 | orchestrator | Monday 09 February 2026 03:10:44 +0000 (0:00:35.647) 0:00:38.884 *******
2026-02-09 03:10:50.116352 | orchestrator | changed: [testbed-manager]
2026-02-09 03:10:50.116397 | orchestrator |
2026-02-09 03:10:50.116413 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-09 03:10:50.116428 | orchestrator | Monday 09 February 2026 03:10:45 +0000 (0:00:01.006) 0:00:39.890 *******
2026-02-09 03:10:50.116445 | orchestrator | ok: [testbed-manager]
2026-02-09 03:10:50.116463 | orchestrator |
2026-02-09 03:10:50.116480 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-09 03:10:50.116497 | orchestrator | Monday 09 February 2026 03:10:46 +0000 (0:00:00.644) 0:00:40.535 *******
2026-02-09 03:10:50.116514 | orchestrator | changed: [testbed-manager]
2026-02-09 03:10:50.116530 | orchestrator |
2026-02-09 03:10:50.116547 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-09 03:10:50.116560 | orchestrator | Monday 09 February 2026 03:10:47 +0000 (0:00:01.590) 0:00:42.126 *******
2026-02-09 03:10:50.116570 | orchestrator | changed: [testbed-manager]
2026-02-09 03:10:50.116579 | orchestrator |
2026-02-09 03:10:50.116589 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-09 03:10:50.116598 | orchestrator | Monday 09 February 2026 03:10:48 +0000 (0:00:00.728) 0:00:42.854 *******
2026-02-09 03:10:50.116608 | orchestrator | changed: [testbed-manager]
2026-02-09 03:10:50.116617 | orchestrator |
2026-02-09 03:10:50.116627 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-09 03:10:50.116636 | orchestrator | Monday 09 February 2026 03:10:49 +0000 (0:00:00.596) 0:00:43.450 *******
2026-02-09 03:10:50.116651 | orchestrator | ok: [testbed-manager]
2026-02-09 03:10:50.116667 | orchestrator |
2026-02-09 03:10:50.116692 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:10:50.116708 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 03:10:50.116725 | orchestrator |
2026-02-09 03:10:50.116741 | orchestrator |
2026-02-09 03:10:50.116756 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:10:50.116772 | orchestrator | Monday 09 February 2026 03:10:49 +0000 (0:00:00.432) 0:00:43.883 *******
2026-02-09 03:10:50.116788 | orchestrator | ===============================================================================
2026-02-09 03:10:50.116805 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.65s
2026-02-09 03:10:50.116820 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.59s
2026-02-09 03:10:50.116846 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.48s
2026-02-09 03:10:50.116856 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.28s
2026-02-09 03:10:50.116866 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.01s
2026-02-09 03:10:50.116875 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.73s
2026-02-09 03:10:50.116885 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.64s
2026-02-09 03:10:50.116895 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.60s
2026-02-09 03:10:50.116904 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.43s
2026-02-09 03:10:50.116914 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.23s
2026-02-09 03:10:52.553447 | orchestrator | 2026-02-09 03:10:52 | INFO  | Task 607b5ad5-7076-4fb7-bcd8-e5ab590dd4c1 (common) was prepared for execution.
2026-02-09 03:10:52.554618 | orchestrator | 2026-02-09 03:10:52 | INFO  | It takes a moment until task 607b5ad5-7076-4fb7-bcd8-e5ab590dd4c1 (common) has been started and output is visible here.
2026-02-09 03:11:04.960723 | orchestrator | 2026-02-09 03:11:04.960804 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-09 03:11:04.960814 | orchestrator | 2026-02-09 03:11:04.960821 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-09 03:11:04.960828 | orchestrator | Monday 09 February 2026 03:10:56 +0000 (0:00:00.285) 0:00:00.285 ******* 2026-02-09 03:11:04.960835 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:11:04.960843 | orchestrator | 2026-02-09 03:11:04.960849 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-09 03:11:04.960856 | orchestrator | Monday 09 February 2026 03:10:58 +0000 (0:00:01.314) 0:00:01.600 ******* 2026-02-09 03:11:04.960862 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 03:11:04.960868 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 03:11:04.960875 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 03:11:04.960881 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 03:11:04.960888 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 03:11:04.960894 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 03:11:04.960900 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 03:11:04.960906 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 03:11:04.960912 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2026-02-09 03:11:04.960926 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 03:11:04.960933 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 03:11:04.960939 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 03:11:04.960945 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 03:11:04.960951 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 03:11:04.960958 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 03:11:04.960964 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 03:11:04.960970 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 03:11:04.961015 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 03:11:04.961023 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 03:11:04.961029 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 03:11:04.961035 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 03:11:04.961042 | orchestrator | 2026-02-09 03:11:04.961048 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-09 03:11:04.961054 | orchestrator | Monday 09 February 2026 03:11:00 +0000 (0:00:02.658) 0:00:04.258 ******* 2026-02-09 03:11:04.961060 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:11:04.961068 | orchestrator | 2026-02-09 03:11:04.961074 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-09 03:11:04.961084 | orchestrator | Monday 09 February 2026 03:11:02 +0000 (0:00:01.364) 0:00:05.623 ******* 2026-02-09 03:11:04.961093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:04.961102 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:04.961124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:04.961131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:04.961138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:04.961144 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:04.961155 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:04.961162 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:04.961168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:04.961186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205255 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205273 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205283 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 
03:11:06.205301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205328 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:06.205340 | orchestrator | 2026-02-09 03:11:06.205345 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-09 03:11:06.205350 | orchestrator | Monday 09 February 2026 03:11:05 +0000 (0:00:03.599) 0:00:09.222 ******* 2026-02-09 03:11:06.205355 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:06.205360 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.205364 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.205368 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:11:06.205373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:06.205384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.822619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.822779 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:11:06.822870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:06.822899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.822943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.822978 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:11:06.823057 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:06.823094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.823113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.823159 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:06.823194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.823212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.823228 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:11:06.823244 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:11:06.823261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:06.823278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.823295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:06.823311 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:11:06.823328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:06.823358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.755831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.755921 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:11:07.755935 | orchestrator | 2026-02-09 03:11:07.755946 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-09 03:11:07.755958 | orchestrator | Monday 09 February 2026 03:11:06 +0000 (0:00:00.951) 0:00:10.173 ******* 2026-02-09 03:11:07.755970 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:07.755979 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.755986 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.756067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:07.756078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.756100 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:11:07.756107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.756113 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:11:07.756139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:07.756146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.756152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.756158 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:11:07.756164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:07.756173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.756187 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:07.756207 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:11:07.756217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:07.756243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:12.922846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:12.922926 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:11:12.922938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:12.922947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:12.922954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:12.922961 | 
orchestrator | skipping: [testbed-node-4] 2026-02-09 03:11:12.922967 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 03:11:12.923016 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:12.923024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:12.923031 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:11:12.923037 | orchestrator | 2026-02-09 03:11:12.923044 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-09 
03:11:12.923052 | orchestrator | Monday 09 February 2026 03:11:08 +0000 (0:00:01.914) 0:00:12.088 ******* 2026-02-09 03:11:12.923058 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:11:12.923066 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:11:12.923077 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:11:12.923087 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:11:12.923113 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:11:12.923125 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:11:12.923136 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:11:12.923145 | orchestrator | 2026-02-09 03:11:12.923157 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-09 03:11:12.923164 | orchestrator | Monday 09 February 2026 03:11:09 +0000 (0:00:00.759) 0:00:12.848 ******* 2026-02-09 03:11:12.923170 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:11:12.923176 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:11:12.923182 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:11:12.923188 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:11:12.923194 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:11:12.923201 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:11:12.923207 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:11:12.923213 | orchestrator | 2026-02-09 03:11:12.923219 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-09 03:11:12.923225 | orchestrator | Monday 09 February 2026 03:11:10 +0000 (0:00:00.901) 0:00:13.750 ******* 2026-02-09 03:11:12.923232 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:12.923252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:12.923264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:12.923274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:12.923281 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:12.923287 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:12.923304 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:15.600029 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600153 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600179 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600203 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:15.600215 | orchestrator | 2026-02-09 03:11:15.600220 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-09 03:11:15.600225 | orchestrator | Monday 09 February 2026 03:11:13 +0000 (0:00:03.298) 0:00:17.049 ******* 2026-02-09 03:11:15.600229 | orchestrator | [WARNING]: Skipped 2026-02-09 
03:11:15.600234 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-09 03:11:15.600239 | orchestrator | to this access issue: 2026-02-09 03:11:15.600243 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-09 03:11:15.600247 | orchestrator | directory 2026-02-09 03:11:15.600251 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 03:11:15.600255 | orchestrator | 2026-02-09 03:11:15.600259 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-09 03:11:15.600263 | orchestrator | Monday 09 February 2026 03:11:14 +0000 (0:00:00.956) 0:00:18.005 ******* 2026-02-09 03:11:15.600267 | orchestrator | [WARNING]: Skipped 2026-02-09 03:11:15.600273 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-09 03:11:26.126329 | orchestrator | to this access issue: 2026-02-09 03:11:26.126406 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-09 03:11:26.126413 | orchestrator | directory 2026-02-09 03:11:26.126417 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 03:11:26.126422 | orchestrator | 2026-02-09 03:11:26.126427 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-09 03:11:26.126433 | orchestrator | Monday 09 February 2026 03:11:15 +0000 (0:00:01.237) 0:00:19.242 ******* 2026-02-09 03:11:26.126453 | orchestrator | [WARNING]: Skipped 2026-02-09 03:11:26.126457 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-09 03:11:26.126461 | orchestrator | to this access issue: 2026-02-09 03:11:26.126465 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-09 03:11:26.126469 | orchestrator | directory 2026-02-09 03:11:26.126473 | orchestrator | ok: 
[testbed-manager -> localhost] 2026-02-09 03:11:26.126477 | orchestrator | 2026-02-09 03:11:26.126481 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-09 03:11:26.126485 | orchestrator | Monday 09 February 2026 03:11:16 +0000 (0:00:00.846) 0:00:20.089 ******* 2026-02-09 03:11:26.126489 | orchestrator | [WARNING]: Skipped 2026-02-09 03:11:26.126492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-09 03:11:26.126496 | orchestrator | to this access issue: 2026-02-09 03:11:26.126500 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-09 03:11:26.126503 | orchestrator | directory 2026-02-09 03:11:26.126507 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 03:11:26.126511 | orchestrator | 2026-02-09 03:11:26.126514 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-09 03:11:26.126518 | orchestrator | Monday 09 February 2026 03:11:17 +0000 (0:00:00.849) 0:00:20.939 ******* 2026-02-09 03:11:26.126522 | orchestrator | changed: [testbed-manager] 2026-02-09 03:11:26.126526 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:11:26.126529 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:11:26.126533 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:11:26.126537 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:11:26.126540 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:11:26.126556 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:11:26.126560 | orchestrator | 2026-02-09 03:11:26.126566 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-09 03:11:26.126573 | orchestrator | Monday 09 February 2026 03:11:20 +0000 (0:00:02.594) 0:00:23.533 ******* 2026-02-09 03:11:26.126581 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 03:11:26.126591 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 03:11:26.126597 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 03:11:26.126603 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 03:11:26.126610 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 03:11:26.126619 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 03:11:26.126625 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 03:11:26.126630 | orchestrator | 2026-02-09 03:11:26.126636 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-09 03:11:26.126642 | orchestrator | Monday 09 February 2026 03:11:22 +0000 (0:00:02.093) 0:00:25.627 ******* 2026-02-09 03:11:26.126648 | orchestrator | changed: [testbed-manager] 2026-02-09 03:11:26.126653 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:11:26.126659 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:11:26.126665 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:11:26.126671 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:11:26.126677 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:11:26.126682 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:11:26.126688 | orchestrator | 2026-02-09 03:11:26.126694 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-09 03:11:26.126707 | orchestrator | Monday 09 February 2026 03:11:24 +0000 (0:00:02.025) 0:00:27.653 ******* 2026-02-09 
03:11:26.126716 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:26.126741 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:26.126755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:26.126759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:26.126763 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:26.126770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:26.126781 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:26.126796 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:26.126803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:26.126815 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:31.753511 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:31.753616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:31.753632 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:31.753688 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:31.753700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:31.753730 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:31.753741 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:31.753769 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:31.753780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:11:31.753790 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:31.753802 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:31.753812 | orchestrator | 2026-02-09 03:11:31.753824 | orchestrator | TASK [common : Copy rabbitmq-env.conf 
to kolla toolbox] ************************ 2026-02-09 03:11:31.753835 | orchestrator | Monday 09 February 2026 03:11:26 +0000 (0:00:01.824) 0:00:29.478 ******* 2026-02-09 03:11:31.753845 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 03:11:31.753856 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 03:11:31.753872 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 03:11:31.753882 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 03:11:31.753891 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 03:11:31.753901 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 03:11:31.753910 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 03:11:31.753920 | orchestrator | 2026-02-09 03:11:31.753930 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-09 03:11:31.753939 | orchestrator | Monday 09 February 2026 03:11:27 +0000 (0:00:01.885) 0:00:31.364 ******* 2026-02-09 03:11:31.753949 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 03:11:31.753959 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 03:11:31.753968 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 03:11:31.753983 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 03:11:31.753994 | orchestrator | changed: [testbed-node-3] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 03:11:31.754003 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 03:11:31.754115 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 03:11:31.754130 | orchestrator | 2026-02-09 03:11:31.754141 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-09 03:11:31.754153 | orchestrator | Monday 09 February 2026 03:11:29 +0000 (0:00:01.639) 0:00:33.003 ******* 2026-02-09 03:11:31.754165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:31.754188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:32.181791 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:32.181915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:32.181971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:32.182004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-02-09 03:11:32.182150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 03:11:32.182163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:32.182174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:32.182207 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:32.182219 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:32.182243 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:32.182254 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:32.182264 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:32.182275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:32.182291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:11:32.182320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:12:50.197334 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:12:50.197477 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:12:50.197494 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:12:50.197520 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:12:50.197532 | orchestrator | 2026-02-09 03:12:50.197545 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-09 03:12:50.197557 | orchestrator | Monday 09 February 2026 03:11:32 +0000 (0:00:02.524) 0:00:35.527 ******* 2026-02-09 03:12:50.197569 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:12:50.197581 | orchestrator | changed: [testbed-manager] 2026-02-09 03:12:50.197592 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:12:50.197603 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:12:50.197613 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:12:50.197624 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:12:50.197635 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:12:50.197646 | orchestrator | 2026-02-09 03:12:50.197675 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-02-09 03:12:50.197687 | orchestrator | Monday 09 February 2026 03:11:33 +0000 (0:00:01.527) 0:00:37.055 ******* 2026-02-09 03:12:50.197698 | orchestrator | changed: [testbed-manager] 2026-02-09 03:12:50.197709 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:12:50.197719 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:12:50.197730 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:12:50.197741 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:12:50.197751 | orchestrator | changed: 
[testbed-node-4] 2026-02-09 03:12:50.197762 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:12:50.197773 | orchestrator | 2026-02-09 03:12:50.197784 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 03:12:50.197794 | orchestrator | Monday 09 February 2026 03:11:34 +0000 (0:00:01.071) 0:00:38.127 ******* 2026-02-09 03:12:50.197805 | orchestrator | 2026-02-09 03:12:50.197816 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 03:12:50.197827 | orchestrator | Monday 09 February 2026 03:11:34 +0000 (0:00:00.065) 0:00:38.193 ******* 2026-02-09 03:12:50.197838 | orchestrator | 2026-02-09 03:12:50.197849 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 03:12:50.197860 | orchestrator | Monday 09 February 2026 03:11:34 +0000 (0:00:00.066) 0:00:38.259 ******* 2026-02-09 03:12:50.197870 | orchestrator | 2026-02-09 03:12:50.197881 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 03:12:50.197892 | orchestrator | Monday 09 February 2026 03:11:34 +0000 (0:00:00.078) 0:00:38.337 ******* 2026-02-09 03:12:50.197903 | orchestrator | 2026-02-09 03:12:50.197913 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 03:12:50.197933 | orchestrator | Monday 09 February 2026 03:11:35 +0000 (0:00:00.342) 0:00:38.680 ******* 2026-02-09 03:12:50.197944 | orchestrator | 2026-02-09 03:12:50.197955 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 03:12:50.197966 | orchestrator | Monday 09 February 2026 03:11:35 +0000 (0:00:00.080) 0:00:38.760 ******* 2026-02-09 03:12:50.197983 | orchestrator | 2026-02-09 03:12:50.198002 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 03:12:50.198152 
| orchestrator | Monday 09 February 2026 03:11:35 +0000 (0:00:00.065) 0:00:38.825 ******* 2026-02-09 03:12:50.198176 | orchestrator | 2026-02-09 03:12:50.198188 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-09 03:12:50.198199 | orchestrator | Monday 09 February 2026 03:11:35 +0000 (0:00:00.097) 0:00:38.923 ******* 2026-02-09 03:12:50.198209 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:12:50.198220 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:12:50.198231 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:12:50.198242 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:12:50.198253 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:12:50.198284 | orchestrator | changed: [testbed-manager] 2026-02-09 03:12:50.198296 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:12:50.198307 | orchestrator | 2026-02-09 03:12:50.198318 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-09 03:12:50.198328 | orchestrator | Monday 09 February 2026 03:12:11 +0000 (0:00:36.015) 0:01:14.938 ******* 2026-02-09 03:12:50.198339 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:12:50.198349 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:12:50.198360 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:12:50.198371 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:12:50.198381 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:12:50.198392 | orchestrator | changed: [testbed-manager] 2026-02-09 03:12:50.198402 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:12:50.198413 | orchestrator | 2026-02-09 03:12:50.198424 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-09 03:12:50.198435 | orchestrator | Monday 09 February 2026 03:12:39 +0000 (0:00:28.130) 0:01:43.069 ******* 2026-02-09 03:12:50.198445 | orchestrator | ok: [testbed-manager] 
2026-02-09 03:12:50.198457 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:12:50.198468 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:12:50.198478 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:12:50.198489 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:12:50.198499 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:12:50.198510 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:12:50.198520 | orchestrator | 2026-02-09 03:12:50.198531 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-09 03:12:50.198542 | orchestrator | Monday 09 February 2026 03:12:41 +0000 (0:00:02.002) 0:01:45.072 ******* 2026-02-09 03:12:50.198553 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:12:50.198563 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:12:50.198574 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:12:50.198584 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:12:50.198595 | orchestrator | changed: [testbed-manager] 2026-02-09 03:12:50.198605 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:12:50.198616 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:12:50.198626 | orchestrator | 2026-02-09 03:12:50.198637 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:12:50.198649 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 03:12:50.198662 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 03:12:50.198684 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 03:12:50.198704 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 03:12:50.198715 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 
ignored=0 2026-02-09 03:12:50.198726 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 03:12:50.198737 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 03:12:50.198747 | orchestrator | 2026-02-09 03:12:50.198758 | orchestrator | 2026-02-09 03:12:50.198769 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:12:50.198780 | orchestrator | Monday 09 February 2026 03:12:50 +0000 (0:00:08.462) 0:01:53.535 ******* 2026-02-09 03:12:50.198791 | orchestrator | =============================================================================== 2026-02-09 03:12:50.198802 | orchestrator | common : Restart fluentd container ------------------------------------- 36.02s 2026-02-09 03:12:50.198812 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.13s 2026-02-09 03:12:50.198823 | orchestrator | common : Restart cron container ----------------------------------------- 8.46s 2026-02-09 03:12:50.198834 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.60s 2026-02-09 03:12:50.198844 | orchestrator | common : Copying over config.json files for services -------------------- 3.30s 2026-02-09 03:12:50.198855 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.66s 2026-02-09 03:12:50.198865 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.59s 2026-02-09 03:12:50.198876 | orchestrator | common : Check common containers ---------------------------------------- 2.52s 2026-02-09 03:12:50.198886 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.09s 2026-02-09 03:12:50.198897 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.03s 2026-02-09 03:12:50.198907 
| orchestrator | common : Initializing toolbox container using normal user --------------- 2.00s 2026-02-09 03:12:50.198918 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.91s 2026-02-09 03:12:50.198928 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.89s 2026-02-09 03:12:50.198939 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.82s 2026-02-09 03:12:50.198950 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.64s 2026-02-09 03:12:50.198960 | orchestrator | common : Creating log volume -------------------------------------------- 1.53s 2026-02-09 03:12:50.198977 | orchestrator | common : include_tasks -------------------------------------------------- 1.36s 2026-02-09 03:12:50.643889 | orchestrator | common : include_tasks -------------------------------------------------- 1.31s 2026-02-09 03:12:50.643966 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.24s 2026-02-09 03:12:50.643972 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.07s 2026-02-09 03:12:53.014564 | orchestrator | 2026-02-09 03:12:53 | INFO  | Task 372600ab-bb8c-4748-a2f2-c08b2a70be52 (loadbalancer) was prepared for execution. 2026-02-09 03:12:53.014638 | orchestrator | 2026-02-09 03:12:53 | INFO  | It takes a moment until task 372600ab-bb8c-4748-a2f2-c08b2a70be52 (loadbalancer) has been started and output is visible here. 
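The TASKS RECAP above comes from Ansible's `profile_tasks` callback, which prints one `task name ---- N.NNs` line per task, slowest first. A short script can pull those timings back out of a captured console log; the sample lines below are copied from this job's recap, while the function and variable names are illustrative, not part of any tooling used here.

```python
import re

# Sample recap lines in the format emitted by Ansible's profile_tasks
# callback, copied from the TASKS RECAP in the job output above.
recap = """\
common : Restart fluentd container ------------------------------------- 36.02s
common : Restart kolla-toolbox container ------------------------------- 28.13s
common : Restart cron container ----------------------------------------- 8.46s
service-cert-copy : common | Copying over extra CA certificates --------- 3.60s
"""

def parse_recap(text):
    """Return (task_name, seconds) pairs from profile_tasks recap lines."""
    rows = []
    for line in text.splitlines():
        # Task name, a run of dashes as padding, then the duration in seconds.
        m = re.match(r"(.+?)\s-+\s([\d.]+)s$", line)
        if m:
            rows.append((m.group(1), float(m.group(2))))
    return rows

rows = parse_recap(recap)
total = sum(seconds for _, seconds in rows)
print(rows[0])            # slowest task of this play
print(round(total, 2))    # seconds accounted for by these four tasks
```

In this play the two container-restart handlers dominate the runtime (about 64 of the 113 total seconds), which is typical for an upgrade run where every `fluentd` and `kolla_toolbox` container is recreated.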
2026-02-09 03:13:08.192484 | orchestrator |
2026-02-09 03:13:08.192574 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 03:13:08.192586 | orchestrator |
2026-02-09 03:13:08.192595 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 03:13:08.192604 | orchestrator | Monday 09 February 2026 03:12:57 +0000 (0:00:00.291) 0:00:00.291 *******
2026-02-09 03:13:08.192635 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:13:08.192646 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:13:08.192655 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:13:08.192663 | orchestrator |
2026-02-09 03:13:08.192672 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 03:13:08.192681 | orchestrator | Monday 09 February 2026 03:12:57 +0000 (0:00:00.350) 0:00:00.641 *******
2026-02-09 03:13:08.192691 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-09 03:13:08.192705 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-09 03:13:08.192718 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-09 03:13:08.192730 | orchestrator |
2026-02-09 03:13:08.192743 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-09 03:13:08.192756 | orchestrator |
2026-02-09 03:13:08.192767 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-09 03:13:08.192794 | orchestrator | Monday 09 February 2026 03:12:58 +0000 (0:00:00.441) 0:00:01.083 *******
2026-02-09 03:13:08.192809 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:13:08.192823 | orchestrator |
2026-02-09 03:13:08.192836 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-09 03:13:08.192849 | orchestrator | Monday 09 February 2026 03:12:58 +0000 (0:00:00.535) 0:00:01.618 *******
2026-02-09 03:13:08.192863 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:13:08.192876 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:13:08.192889 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:13:08.192897 | orchestrator |
2026-02-09 03:13:08.192905 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-09 03:13:08.192912 | orchestrator | Monday 09 February 2026 03:12:59 +0000 (0:00:00.591) 0:00:02.209 *******
2026-02-09 03:13:08.192920 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:13:08.192928 | orchestrator |
2026-02-09 03:13:08.192936 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-09 03:13:08.192943 | orchestrator | Monday 09 February 2026 03:13:00 +0000 (0:00:00.677) 0:00:02.886 *******
2026-02-09 03:13:08.192951 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:13:08.192959 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:13:08.192967 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:13:08.192974 | orchestrator |
2026-02-09 03:13:08.192982 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-09 03:13:08.192990 | orchestrator | Monday 09 February 2026 03:13:00 +0000 (0:00:00.598) 0:00:03.485 *******
2026-02-09 03:13:08.192998 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-09 03:13:08.193006 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-09 03:13:08.193014 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-09 03:13:08.193022 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-09 03:13:08.193030 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-09 03:13:08.193040 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-09 03:13:08.193049 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-09 03:13:08.193059 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-09 03:13:08.193068 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-09 03:13:08.193077 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-09 03:13:08.193123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-09 03:13:08.193133 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-09 03:13:08.193142 | orchestrator |
2026-02-09 03:13:08.193155 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-09 03:13:08.193169 | orchestrator | Monday 09 February 2026 03:13:03 +0000 (0:00:03.201) 0:00:06.687 *******
2026-02-09 03:13:08.193182 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-09 03:13:08.193196 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-09 03:13:08.193211 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-09 03:13:08.193226 | orchestrator |
2026-02-09 03:13:08.193241 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-09 03:13:08.193256 | orchestrator | Monday 09 February 2026 03:13:04 +0000 (0:00:00.708) 0:00:07.395 *******
2026-02-09 03:13:08.193269 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-09 03:13:08.193278 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-09 03:13:08.193286 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-09 03:13:08.193293 | orchestrator |
2026-02-09 03:13:08.193301 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-09 03:13:08.193309 | orchestrator | Monday 09 February 2026 03:13:05 +0000 (0:00:01.231) 0:00:08.627 *******
2026-02-09 03:13:08.193317 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-09 03:13:08.193325 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:13:08.193351 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-09 03:13:08.193359 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:13:08.193367 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-09 03:13:08.193375 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:13:08.193383 | orchestrator |
2026-02-09 03:13:08.193391 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-09 03:13:08.193399 | orchestrator | Monday 09 February 2026 03:13:06 +0000 (0:00:00.567) 0:00:09.195 *******
2026-02-09 03:13:08.193416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-09 03:13:08.193431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-09 03:13:08.193440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-09 03:13:08.193454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:08.193463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:08.193478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:13.503394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:13.503531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:13.503546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:13.503554 | orchestrator |
2026-02-09 03:13:13.503562 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-02-09 03:13:13.503570 | orchestrator | Monday 09 February 2026 03:13:08 +0000 (0:00:01.804) 0:00:10.999 *******
2026-02-09 03:13:13.503576 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:13:13.503602 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:13:13.503608 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:13:13.503615 | orchestrator |
2026-02-09 03:13:13.503621 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-02-09 03:13:13.503636 | orchestrator | Monday 09 February 2026 03:13:09 +0000 (0:00:00.865) 0:00:11.864 *******
2026-02-09 03:13:13.503644 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-02-09 03:13:13.503651 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-02-09 03:13:13.503656 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-02-09 03:13:13.503675 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-02-09 03:13:13.503682 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-02-09 03:13:13.503688 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-02-09 03:13:13.503694 | orchestrator |
2026-02-09 03:13:13.503700 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-02-09 03:13:13.503713 | orchestrator | Monday 09 February 2026 03:13:10 +0000 (0:00:01.468) 0:00:13.333 *******
2026-02-09 03:13:13.503720 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:13:13.503726 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:13:13.503732 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:13:13.503738 | orchestrator |
2026-02-09 03:13:13.503744 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-02-09 03:13:13.503750 | orchestrator | Monday 09 February 2026 03:13:11 +0000 (0:00:00.995) 0:00:14.328 *******
2026-02-09 03:13:13.503755 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:13:13.503761 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:13:13.503775 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:13:13.503780 | orchestrator |
2026-02-09 03:13:13.503786 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-02-09 03:13:13.503791 | orchestrator | Monday 09 February 2026 03:13:12 +0000 (0:00:01.375) 0:00:15.704 *******
2026-02-09 03:13:13.503798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-09 03:13:13.503853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:13.503862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:13.503869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-09 03:13:13.503885 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:13:13.503892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-09 03:13:13.503945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:13.503961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:13.503968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-09 03:13:13.503974 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:13:13.503987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-09 03:13:16.216761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:16.216856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:16.216866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-09 03:13:16.216873 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:13:16.216881 | orchestrator |
2026-02-09 03:13:16.216888 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-02-09 03:13:16.216895 | orchestrator | Monday 09 February 2026 03:13:13 +0000 (0:00:00.603) 0:00:16.308 *******
2026-02-09 03:13:16.216901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-09 03:13:16.216909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-09 03:13:16.216915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-09 03:13:16.216955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:16.216961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:16.216965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-09 03:13:16.216969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:16.216973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:16.216977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-09 03:13:16.216997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:24.592595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:24.592737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138', '__omit_place_holder__c422b486e5f4e86d17930879b6bd60b500bda138'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-09 03:13:24.592765 | orchestrator |
2026-02-09 03:13:24.592783 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-02-09 03:13:24.592801 | orchestrator | Monday 09 February 2026 03:13:16 +0000 (0:00:02.711) 0:00:19.019 *******
2026-02-09 03:13:24.592818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-09 03:13:24.592835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-09 03:13:24.592852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-09 03:13:24.592912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:24.592953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:24.592970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-09 03:13:24.592987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:24.593003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-09 03:13:24.593019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130',
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 03:13:24.593034 | orchestrator | 2026-02-09 03:13:24.593049 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-09 03:13:24.593064 | orchestrator | Monday 09 February 2026 03:13:19 +0000 (0:00:03.094) 0:00:22.114 ******* 2026-02-09 03:13:24.593113 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-09 03:13:24.593132 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-09 03:13:24.593147 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-09 03:13:24.593162 | orchestrator | 2026-02-09 03:13:24.593178 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-09 03:13:24.593193 | orchestrator | Monday 09 February 2026 03:13:21 +0000 (0:00:01.890) 0:00:24.004 ******* 2026-02-09 03:13:24.593207 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-09 03:13:24.593221 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-09 03:13:24.593235 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-09 03:13:24.593250 | orchestrator | 2026-02-09 03:13:24.593265 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-09 03:13:24.593280 | orchestrator | Monday 09 February 2026 03:13:24 +0000 
(0:00:02.821) 0:00:26.826 ******* 2026-02-09 03:13:24.593295 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:24.593311 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:24.593325 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:24.593340 | orchestrator | 2026-02-09 03:13:24.593364 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-09 03:13:36.325231 | orchestrator | Monday 09 February 2026 03:13:24 +0000 (0:00:00.576) 0:00:27.402 ******* 2026-02-09 03:13:36.325315 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-09 03:13:36.325335 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-09 03:13:36.325342 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-09 03:13:36.325348 | orchestrator | 2026-02-09 03:13:36.325357 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-09 03:13:36.325368 | orchestrator | Monday 09 February 2026 03:13:26 +0000 (0:00:02.283) 0:00:29.686 ******* 2026-02-09 03:13:36.325378 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-09 03:13:36.325389 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-09 03:13:36.325399 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-09 03:13:36.325407 | orchestrator | 2026-02-09 03:13:36.325415 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-09 03:13:36.325424 | orchestrator | Monday 09 February 2026 
03:13:29 +0000 (0:00:02.158) 0:00:31.844 ******* 2026-02-09 03:13:36.325433 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-09 03:13:36.325442 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-09 03:13:36.325451 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-09 03:13:36.325459 | orchestrator | 2026-02-09 03:13:36.325481 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-09 03:13:36.325492 | orchestrator | Monday 09 February 2026 03:13:30 +0000 (0:00:01.419) 0:00:33.264 ******* 2026-02-09 03:13:36.325502 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-09 03:13:36.325511 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-09 03:13:36.325520 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-09 03:13:36.325529 | orchestrator | 2026-02-09 03:13:36.325561 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-09 03:13:36.325572 | orchestrator | Monday 09 February 2026 03:13:31 +0000 (0:00:01.451) 0:00:34.715 ******* 2026-02-09 03:13:36.325583 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:13:36.325592 | orchestrator | 2026-02-09 03:13:36.325602 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-09 03:13:36.325611 | orchestrator | Monday 09 February 2026 03:13:32 +0000 (0:00:00.637) 0:00:35.352 ******* 2026-02-09 03:13:36.325624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-09 03:13:36.325638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-09 03:13:36.325653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-09 03:13:36.325683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 03:13:36.325694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 03:13:36.325704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 03:13:36.325722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 03:13:36.325734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 03:13:36.325745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 03:13:36.325756 | orchestrator | 2026-02-09 03:13:36.325765 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-09 03:13:36.325785 | orchestrator | Monday 09 February 2026 03:13:35 +0000 (0:00:03.186) 0:00:38.539 ******* 2026-02-09 03:13:36.325811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 03:13:37.119428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:37.119527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:37.119565 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:37.119574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 03:13:37.119581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:37.119587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:37.119592 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:37.119609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 03:13:37.119631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:37.119638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:37.119647 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:37.119653 | orchestrator | 2026-02-09 03:13:37.119659 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-09 
03:13:37.119665 | orchestrator | Monday 09 February 2026 03:13:36 +0000 (0:00:00.598) 0:00:39.138 ******* 2026-02-09 03:13:37.119671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 03:13:37.119677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:37.119682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:37.119687 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:37.119693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 03:13:37.119705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:38.014985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:38.015155 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:38.015178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 03:13:38.015191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:38.015202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:38.015213 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:38.015225 | orchestrator | 2026-02-09 03:13:38.015236 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-09 03:13:38.015248 | orchestrator | Monday 09 February 2026 03:13:37 +0000 (0:00:00.793) 0:00:39.931 ******* 2026-02-09 03:13:38.015259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 03:13:38.015270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:38.015301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:38.015323 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:38.015347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 03:13:38.015357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:38.015367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:38.015377 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:38.015388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 03:13:38.015414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:38.015429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:38.015453 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:39.440789 | orchestrator | 2026-02-09 03:13:39.440861 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-09 03:13:39.440870 | orchestrator | Monday 09 February 2026 03:13:37 +0000 (0:00:00.882) 0:00:40.814 ******* 2026-02-09 03:13:39.440879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 03:13:39.440899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:39.440906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:39.440918 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:39.440924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 03:13:39.440930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:39.440952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:39.440971 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:39.440989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 03:13:39.440994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:39.441000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:39.441005 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:39.441010 | orchestrator | 2026-02-09 03:13:39.441015 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-09 03:13:39.441020 | orchestrator | Monday 09 February 2026 03:13:38 +0000 (0:00:00.608) 0:00:41.423 ******* 2026-02-09 03:13:39.441025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 03:13:39.441030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:39.441046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:39.441051 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:39.441061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 03:13:40.592664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:40.592736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:40.592743 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:40.592749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 03:13:40.592754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:40.592758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:40.592777 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:40.592781 | orchestrator | 2026-02-09 03:13:40.592786 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-09 03:13:40.592791 | orchestrator | Monday 09 February 2026 03:13:39 +0000 (0:00:00.826) 0:00:42.250 ******* 2026-02-09 03:13:40.592806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-09 03:13:40.592821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:40.592825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:40.592829 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:40.592833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-09 03:13:40.592837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:40.592845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:40.592848 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:40.592855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-09 03:13:40.592862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:42.141425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:42.141513 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:42.141524 | orchestrator | 2026-02-09 03:13:42.141531 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-09 03:13:42.141539 | orchestrator | Monday 09 February 2026 03:13:40 +0000 (0:00:01.151) 0:00:43.401 ******* 2026-02-09 03:13:42.141548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 03:13:42.141555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:42.141579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:42.141586 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:42.141593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 03:13:42.141616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:42.141650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:42.141663 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:42.141672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 03:13:42.141682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:42.141700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:42.141711 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:42.141721 | orchestrator | 2026-02-09 03:13:42.141731 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-09 03:13:42.141742 | orchestrator | Monday 09 February 2026 03:13:41 +0000 (0:00:00.616) 0:00:44.018 ******* 2026-02-09 03:13:42.141750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 03:13:42.141757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:42.141775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:48.624252 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:48.624336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 03:13:48.624345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:48.624368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:48.624373 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:48.624378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 03:13:48.624393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 03:13:48.624397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 03:13:48.624401 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:48.624405 | orchestrator | 2026-02-09 03:13:48.624410 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-09 03:13:48.624416 | orchestrator | Monday 09 February 2026 03:13:42 +0000 (0:00:00.922) 0:00:44.940 ******* 2026-02-09 03:13:48.624420 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-09 03:13:48.624436 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-09 03:13:48.624441 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-09 03:13:48.624448 | orchestrator | 2026-02-09 03:13:48.624454 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-09 03:13:48.624459 | orchestrator | Monday 09 February 2026 03:13:43 +0000 (0:00:01.644) 0:00:46.585 ******* 2026-02-09 03:13:48.624463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-09 03:13:48.624468 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-09 03:13:48.624472 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-09 03:13:48.624475 | orchestrator | 2026-02-09 03:13:48.624484 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-09 03:13:48.624488 | orchestrator | Monday 09 February 2026 03:13:45 +0000 (0:00:01.646) 0:00:48.232 ******* 2026-02-09 03:13:48.624492 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-09 03:13:48.624496 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-09 03:13:48.624500 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-09 03:13:48.624504 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-09 03:13:48.624508 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:48.624512 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-09 03:13:48.624516 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:48.624520 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-09 03:13:48.624524 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:13:48.624528 | orchestrator | 2026-02-09 03:13:48.624532 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-09 03:13:48.624536 | orchestrator | Monday 09 February 2026 03:13:46 +0000 (0:00:00.845) 0:00:49.077 ******* 2026-02-09 03:13:48.624540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-09 03:13:48.624545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-09 03:13:48.624552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-09 03:13:48.624562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 03:13:52.760302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 03:13:52.760388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 03:13:52.760400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 03:13:52.760407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 03:13:52.760428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 03:13:52.760435 | orchestrator | 2026-02-09 03:13:52.760443 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-09 03:13:52.760451 | orchestrator | Monday 09 February 2026 03:13:48 +0000 (0:00:02.351) 0:00:51.428 ******* 2026-02-09 03:13:52.760458 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:13:52.760464 | orchestrator | 2026-02-09 03:13:52.760471 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-09 03:13:52.760477 | orchestrator | Monday 09 February 2026 03:13:49 +0000 (0:00:00.782) 0:00:52.211 ******* 2026-02-09 03:13:52.760499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 03:13:52.760525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 03:13:52.760532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 03:13:52.760539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:13:52.760546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 03:13:52.760556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 03:13:52.760563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:13:52.760581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 03:13:53.503197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 03:13:53.503288 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 03:13:53.503301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:13:53.503324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 03:13:53.503333 | orchestrator | 2026-02-09 03:13:53.503343 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-09 03:13:53.503352 | orchestrator | Monday 09 February 2026 03:13:52 +0000 (0:00:03.355) 0:00:55.567 ******* 2026-02-09 03:13:53.503362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 03:13:53.503403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 03:13:53.503413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:13:53.503421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 03:13:53.503430 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:13:53.503439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 03:13:53.503452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 03:13:53.503467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:13:53.503475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 03:13:53.503483 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:13:53.503499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 03:14:02.323567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 03:14:02.323648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-02-09 03:14:02.323656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 03:14:02.323678 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:02.323686 | orchestrator | 2026-02-09 03:14:02.323693 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-09 03:14:02.323699 | orchestrator | Monday 09 February 2026 03:13:53 +0000 (0:00:00.744) 0:00:56.311 ******* 2026-02-09 03:14:02.323706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-09 03:14:02.323713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-09 03:14:02.323720 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:02.323738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-09 03:14:02.323744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-09 03:14:02.323749 | 
orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:02.323754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-09 03:14:02.323760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-09 03:14:02.323765 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:02.323770 | orchestrator | 2026-02-09 03:14:02.323775 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-09 03:14:02.323781 | orchestrator | Monday 09 February 2026 03:13:54 +0000 (0:00:01.337) 0:00:57.649 ******* 2026-02-09 03:14:02.323786 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:14:02.323791 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:14:02.323796 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:14:02.323801 | orchestrator | 2026-02-09 03:14:02.323806 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-09 03:14:02.323812 | orchestrator | Monday 09 February 2026 03:13:56 +0000 (0:00:01.312) 0:00:58.962 ******* 2026-02-09 03:14:02.323817 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:14:02.323822 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:14:02.323827 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:14:02.323832 | orchestrator | 2026-02-09 03:14:02.323837 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-09 03:14:02.323843 | orchestrator | Monday 09 February 2026 03:13:58 +0000 (0:00:02.000) 0:01:00.963 ******* 2026-02-09 03:14:02.323848 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:14:02.323853 | 
orchestrator | 2026-02-09 03:14:02.323871 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-09 03:14:02.323876 | orchestrator | Monday 09 February 2026 03:13:58 +0000 (0:00:00.624) 0:01:01.588 ******* 2026-02-09 03:14:02.323883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 03:14:02.323900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:14:02.323907 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:14:02.323912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 03:14:02.323918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:14:02.323928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:14:03.065627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 03:14:03.065767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:14:03.065795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:14:03.065814 | orchestrator | 2026-02-09 03:14:03.065833 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-09 03:14:03.065852 | orchestrator | Monday 09 February 2026 03:14:02 +0000 (0:00:03.539) 0:01:05.128 ******* 2026-02-09 03:14:03.065871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 03:14:03.065889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:14:03.065964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:14:03.065989 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:03.066116 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 03:14:03.066184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:14:03.066202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:14:03.066354 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:03.066372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 03:14:03.066414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 03:14:12.684439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:14:12.684534 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:12.684548 | orchestrator | 2026-02-09 03:14:12.684557 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-09 03:14:12.684567 | orchestrator | Monday 09 February 2026 03:14:03 +0000 (0:00:00.743) 0:01:05.871 ******* 2026-02-09 03:14:12.684590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-09 03:14:12.684601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-09 03:14:12.684611 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:12.684620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-09 03:14:12.684628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-09 03:14:12.684636 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:12.684644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-09 03:14:12.684652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-09 03:14:12.684660 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:12.684668 | orchestrator | 2026-02-09 03:14:12.684676 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-09 03:14:12.684684 | orchestrator | Monday 09 February 2026 03:14:03 +0000 (0:00:00.918) 0:01:06.789 ******* 2026-02-09 03:14:12.684692 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:14:12.684700 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:14:12.684708 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:14:12.684715 | orchestrator | 2026-02-09 03:14:12.684723 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-09 03:14:12.684731 | orchestrator | Monday 09 February 2026 03:14:05 +0000 (0:00:01.553) 0:01:08.343 ******* 2026-02-09 03:14:12.684804 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:14:12.684813 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:14:12.684821 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:14:12.684828 | orchestrator | 2026-02-09 03:14:12.684836 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-09 03:14:12.684844 | orchestrator | 
Monday 09 February 2026 03:14:07 +0000 (0:00:01.967) 0:01:10.310 ******* 2026-02-09 03:14:12.684854 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:12.684867 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:12.684880 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:12.684893 | orchestrator | 2026-02-09 03:14:12.684906 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-09 03:14:12.684920 | orchestrator | Monday 09 February 2026 03:14:07 +0000 (0:00:00.313) 0:01:10.623 ******* 2026-02-09 03:14:12.684933 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:14:12.684947 | orchestrator | 2026-02-09 03:14:12.684961 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-09 03:14:12.684976 | orchestrator | Monday 09 February 2026 03:14:08 +0000 (0:00:00.684) 0:01:11.308 ******* 2026-02-09 03:14:12.685016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-09 03:14:12.685043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-09 03:14:12.685055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-09 03:14:12.685065 | orchestrator | 2026-02-09 03:14:12.685074 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-09 03:14:12.685085 | orchestrator | Monday 09 February 2026 03:14:11 +0000 (0:00:02.813) 0:01:14.121 ******* 2026-02-09 03:14:12.685103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-09 03:14:12.685113 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:12.685123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-09 03:14:12.685159 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:12.685177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-09 03:14:20.505381 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:20.505467 | orchestrator | 2026-02-09 03:14:20.505475 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-09 03:14:20.505481 | orchestrator | Monday 09 February 2026 03:14:12 +0000 (0:00:01.372) 0:01:15.494 ******* 2026-02-09 03:14:20.505499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 03:14:20.505506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 03:14:20.505512 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:20.505531 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 03:14:20.505536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 03:14:20.505540 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:20.505543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 03:14:20.505547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 03:14:20.505551 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:20.505555 | orchestrator | 2026-02-09 03:14:20.505559 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-09 03:14:20.505563 | orchestrator | Monday 09 February 2026 03:14:14 +0000 (0:00:01.689) 0:01:17.183 ******* 2026-02-09 03:14:20.505567 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:20.505571 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:20.505574 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:20.505578 | orchestrator | 2026-02-09 03:14:20.505585 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-09 03:14:20.505588 | orchestrator | Monday 09 February 2026 03:14:14 +0000 (0:00:00.428) 0:01:17.612 ******* 2026-02-09 03:14:20.505592 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:20.505596 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:20.505600 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:20.505603 | orchestrator | 2026-02-09 03:14:20.505607 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-09 03:14:20.505611 | orchestrator | Monday 09 February 2026 03:14:16 +0000 (0:00:01.413) 0:01:19.026 ******* 2026-02-09 03:14:20.505615 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:14:20.505619 | orchestrator | 2026-02-09 03:14:20.505622 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-09 03:14:20.505626 | orchestrator | Monday 09 February 2026 03:14:17 +0000 (0:00:00.953) 0:01:19.979 ******* 2026-02-09 03:14:20.505646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 03:14:20.505657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:14:20.505662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 
03:14:20.505667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 03:14:20.505671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 03:14:20.505678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:14:21.190824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 03:14:21.190935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 03:14:21.190949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 03:14:21.190957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:14:21.190964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-09 03:14:21.190993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-09 03:14:21.191005 | orchestrator |
2026-02-09 03:14:21.191013 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2026-02-09 03:14:21.191022 | orchestrator | Monday 09 February 2026 03:14:20 +0000 (0:00:03.435) 0:01:23.415 *******
2026-02-09 03:14:21.191029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-09 03:14:21.191035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 03:14:21.191042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-09 03:14:21.191049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-09 03:14:21.191056 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:14:21.191074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-09 03:14:27.347853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 03:14:27.347966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-09 03:14:27.347986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-09 03:14:27.348001 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:14:27.348016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-09 03:14:27.348035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 03:14:27.348128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-09 03:14:27.348190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-09 03:14:27.348211 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:14:27.348229 | orchestrator |
2026-02-09 03:14:27.348251 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-02-09 03:14:27.348271 | orchestrator | Monday 09 February 2026 03:14:21 +0000 (0:00:00.689) 0:01:24.104 *******
2026-02-09 03:14:27.348291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-09 03:14:27.348311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-09 03:14:27.348332 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:14:27.348344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-09 03:14:27.348355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-09 03:14:27.348366 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:14:27.348377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-09 03:14:27.348388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-09 03:14:27.348398 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:14:27.348409 | orchestrator |
2026-02-09 03:14:27.348420 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-09 03:14:27.348431 | orchestrator | Monday 09 February 2026 03:14:22 +0000 (0:00:01.137) 0:01:25.242 *******
2026-02-09 03:14:27.348441 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:14:27.348464 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:14:27.348475 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:14:27.348486 | orchestrator |
2026-02-09 03:14:27.348497 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-09 03:14:27.348507 | orchestrator | Monday 09 February 2026 03:14:23 +0000 (0:00:01.279) 0:01:26.522 *******
2026-02-09 03:14:27.348518 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:14:27.348529 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:14:27.348540 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:14:27.348551 | orchestrator |
2026-02-09 03:14:27.348561 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-09 03:14:27.348572 | orchestrator | Monday 09 February 2026 03:14:25 +0000 (0:00:01.985) 0:01:28.507 *******
2026-02-09 03:14:27.348583 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:14:27.348593 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:14:27.348604 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:14:27.348614 | orchestrator |
2026-02-09 03:14:27.348625 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-09 03:14:27.348636 | orchestrator | Monday 09 February 2026 03:14:25 +0000 (0:00:00.312) 0:01:28.820 *******
2026-02-09 03:14:27.348646 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:14:27.348657 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:14:27.348668 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:14:27.348678 | orchestrator |
2026-02-09 03:14:27.348689 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-09 03:14:27.348699 | orchestrator | Monday 09 February 2026 03:14:26 +0000 (0:00:00.325) 0:01:29.145 *******
2026-02-09 03:14:27.348710 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:14:27.348720 | orchestrator |
2026-02-09 03:14:27.348731 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-02-09 03:14:27.348749 | orchestrator | Monday 09 February 2026 03:14:27 +0000 (0:00:01.010) 0:01:30.156 *******
2026-02-09 03:14:30.693968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-09 03:14:30.694091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-09 03:14:30.694101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-09 03:14:30.694125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-09 03:14:30.694131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-09 03:14:30.694137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 03:14:30.694201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-09 03:14:30.694208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-09 03:14:30.694214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-09 03:14:30.694225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-09 03:14:30.694230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-09 03:14:30.694242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-09 03:14:31.518721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.518806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-09 03:14:31.518815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.518844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.518851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.518857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.518893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.518900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.518906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.518918 | orchestrator |
2026-02-09 03:14:31.518926 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-09 03:14:31.518933 | orchestrator | Monday 09 February 2026 03:14:30 +0000 (0:00:03.565) 0:01:33.722 *******
2026-02-09 03:14:31.518940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-09 03:14:31.518947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-09 03:14:31.518953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.518966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.994423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.994555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-09 03:14:31.994615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.994638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-09 03:14:31.994656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.994676 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:14:31.995387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.995458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.995472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.995498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.995515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-09 03:14:31.995527 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:14:31.995540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-09 03:14:31.995553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-09 03:14:31.995574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 03:14:41.875971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 03:14:41.876067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 03:14:41.876096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:14:41.876107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-09 03:14:41.876116 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:41.876127 | orchestrator | 2026-02-09 03:14:41.876136 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-09 03:14:41.876236 | orchestrator | Monday 09 February 2026 03:14:31 +0000 (0:00:01.081) 0:01:34.803 ******* 2026-02-09 03:14:41.876248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-09 03:14:41.876261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-09 03:14:41.876272 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:41.876280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-09 03:14:41.876289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-09 03:14:41.876298 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:41.876305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-09 03:14:41.876328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-09 03:14:41.876333 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:41.876339 | orchestrator | 2026-02-09 03:14:41.876345 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-09 03:14:41.876366 | orchestrator | Monday 09 February 2026 03:14:33 +0000 (0:00:01.302) 0:01:36.106 ******* 2026-02-09 03:14:41.876373 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:14:41.876378 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:14:41.876384 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:14:41.876389 | orchestrator | 2026-02-09 03:14:41.876394 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-09 03:14:41.876400 | orchestrator | Monday 09 February 2026 03:14:34 +0000 (0:00:01.298) 0:01:37.404 ******* 2026-02-09 03:14:41.876405 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:14:41.876411 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:14:41.876416 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:14:41.876422 | 
orchestrator | 2026-02-09 03:14:41.876427 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-09 03:14:41.876432 | orchestrator | Monday 09 February 2026 03:14:36 +0000 (0:00:01.958) 0:01:39.362 ******* 2026-02-09 03:14:41.876438 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:41.876443 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:41.876449 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:41.876454 | orchestrator | 2026-02-09 03:14:41.876459 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-09 03:14:41.876465 | orchestrator | Monday 09 February 2026 03:14:36 +0000 (0:00:00.314) 0:01:39.677 ******* 2026-02-09 03:14:41.876470 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:14:41.876476 | orchestrator | 2026-02-09 03:14:41.876481 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-09 03:14:41.876487 | orchestrator | Monday 09 February 2026 03:14:37 +0000 (0:00:01.023) 0:01:40.700 ******* 2026-02-09 03:14:41.876503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 03:14:41.876516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 03:14:45.084814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 03:14:45.084913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 03:14:45.084967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 03:14:45.084979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 03:14:45.084994 | orchestrator | 2026-02-09 03:14:45.085005 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-09 03:14:45.085014 | orchestrator | Monday 09 February 2026 03:14:41 +0000 (0:00:04.114) 0:01:44.815 ******* 2026-02-09 03:14:45.085036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 03:14:45.190065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 03:14:45.190251 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:45.190282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 
03:14:45.190338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 03:14:45.190355 | orchestrator | skipping: [testbed-node-1] 
2026-02-09 03:14:45.190363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 03:14:45.190382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 03:14:57.517342 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:57.517432 | orchestrator | 2026-02-09 03:14:57.517444 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-09 03:14:57.517453 | orchestrator | 
Monday 09 February 2026 03:14:45 +0000 (0:00:03.189) 0:01:48.004 ******* 2026-02-09 03:14:57.517462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 03:14:57.517472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 03:14:57.517481 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:57.517488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 03:14:57.517496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 03:14:57.517503 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:14:57.517511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 03:14:57.517532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 03:14:57.517539 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:14:57.517546 | orchestrator | 2026-02-09 03:14:57.517553 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-09 03:14:57.517560 | orchestrator | Monday 09 February 2026 03:14:48 +0000 (0:00:03.615) 0:01:51.620 ******* 2026-02-09 03:14:57.517586 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:14:57.517593 | orchestrator 
| changed: [testbed-node-1]
2026-02-09 03:14:57.517600 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:14:57.517606 | orchestrator |
2026-02-09 03:14:57.517613 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-02-09 03:14:57.517620 | orchestrator | Monday 09 February 2026 03:14:50 +0000 (0:00:01.331) 0:01:52.951 *******
2026-02-09 03:14:57.517627 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:14:57.517634 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:14:57.517641 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:14:57.517647 | orchestrator |
2026-02-09 03:14:57.517654 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-02-09 03:14:57.517673 | orchestrator | Monday 09 February 2026 03:14:52 +0000 (0:00:02.177) 0:01:55.129 *******
2026-02-09 03:14:57.517680 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:14:57.517687 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:14:57.517694 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:14:57.517700 | orchestrator |
2026-02-09 03:14:57.517707 | orchestrator | TASK [include_role : grafana] **************************************************
2026-02-09 03:14:57.517714 | orchestrator | Monday 09 February 2026 03:14:52 +0000 (0:00:00.327) 0:01:55.457 *******
2026-02-09 03:14:57.517720 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:14:57.517727 | orchestrator |
2026-02-09 03:14:57.517734 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-02-09 03:14:57.517741 | orchestrator | Monday 09 February 2026 03:14:53 +0000 (0:00:01.317) 0:01:56.774 *******
2026-02-09 03:14:57.517748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 03:14:57.517757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 03:14:57.517764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 03:14:57.517771 | 
orchestrator | 2026-02-09 03:14:57.517778 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-09 03:14:57.517792 | orchestrator | Monday 09 February 2026 03:14:57 +0000 (0:00:03.148) 0:01:59.923 ******* 2026-02-09 03:14:57.517800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-09 03:14:57.517807 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:14:57.517819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-09 03:15:06.444010 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:15:06.444107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-09 03:15:06.444210 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:15:06.444227 | orchestrator | 2026-02-09 03:15:06.444237 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-09 03:15:06.444246 | orchestrator | Monday 09 February 2026 03:14:57 +0000 (0:00:00.401) 0:02:00.324 ******* 2026-02-09 03:15:06.444256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-09 03:15:06.444267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-09 03:15:06.444277 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:15:06.444284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-09 03:15:06.444292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-09 03:15:06.444300 | orchestrator | skipping: 
[testbed-node-1]
2026-02-09 03:15:06.444308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-02-09 03:15:06.444317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-02-09 03:15:06.444352 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:15:06.444360 | orchestrator |
2026-02-09 03:15:06.444368 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-02-09 03:15:06.444375 | orchestrator | Monday 09 February 2026 03:14:58 +0000 (0:00:00.897) 0:02:01.221 *******
2026-02-09 03:15:06.444383 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:15:06.444390 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:15:06.444398 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:15:06.444405 | orchestrator |
2026-02-09 03:15:06.444414 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-02-09 03:15:06.444422 | orchestrator | Monday 09 February 2026 03:14:59 +0000 (0:00:01.275) 0:02:02.497 *******
2026-02-09 03:15:06.444429 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:15:06.444436 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:15:06.444444 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:15:06.444451 | orchestrator |
2026-02-09 03:15:06.444459 | orchestrator | TASK [include_role : heat] *****************************************************
2026-02-09 03:15:06.444493 | orchestrator | Monday 09 February 2026 03:15:01 +0000 (0:00:02.089) 0:02:04.587 *******
2026-02-09 03:15:06.444501 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:15:06.444509 | orchestrator | skipping: [testbed-node-1]
2026-02-09
03:15:06.444516 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:15:06.444523 | orchestrator | 2026-02-09 03:15:06.444532 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-09 03:15:06.444540 | orchestrator | Monday 09 February 2026 03:15:02 +0000 (0:00:00.313) 0:02:04.900 ******* 2026-02-09 03:15:06.444548 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:15:06.444556 | orchestrator | 2026-02-09 03:15:06.444564 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-09 03:15:06.444572 | orchestrator | Monday 09 February 2026 03:15:03 +0000 (0:00:01.099) 0:02:06.000 ******* 2026-02-09 03:15:06.444601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 03:15:06.444624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 03:15:06.444641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 03:15:08.231656 | orchestrator | 2026-02-09 03:15:08.231768 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-09 03:15:08.231790 | orchestrator | Monday 09 February 2026 03:15:06 +0000 (0:00:03.254) 0:02:09.255 ******* 2026-02-09 03:15:08.231833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 03:15:08.231859 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:15:08.231988 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 03:15:08.232031 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:15:08.232057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 03:15:08.232073 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:15:08.232088 | orchestrator | 2026-02-09 03:15:08.232104 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-09 03:15:08.232119 | orchestrator | Monday 09 February 2026 03:15:07 +0000 (0:00:00.685) 0:02:09.940 ******* 2026-02-09 03:15:08.232135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-09 03:15:08.232162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 03:15:08.232209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-09 03:15:08.232237 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 03:15:16.938436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-09 03:15:16.938555 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:15:16.938576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-09 03:15:16.938592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 03:15:16.938624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-09 03:15:16.938638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 03:15:16.938657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-09 03:15:16.938676 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:15:16.938695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-09 03:15:16.938713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 03:15:16.938731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-09 03:15:16.938778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 03:15:16.938796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-02-09 03:15:16.938813 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:15:16.938831 | orchestrator |
2026-02-09 03:15:16.938851 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-02-09 03:15:16.938871 | orchestrator | Monday 09 February 2026 03:15:08 +0000 (0:00:01.098) 0:02:11.039 *******
2026-02-09 03:15:16.938888 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:15:16.938899 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:15:16.938910 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:15:16.938920 | orchestrator |
2026-02-09 03:15:16.938931 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-02-09 03:15:16.938942 | orchestrator | Monday 09 February 2026 03:15:09 +0000 (0:00:01.610) 0:02:12.650 *******
2026-02-09 03:15:16.938953 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:15:16.938964 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:15:16.938975 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:15:16.938986 | orchestrator |
2026-02-09 03:15:16.938996 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-02-09 03:15:16.939007 | orchestrator | Monday 09 February 2026 03:15:11 +0000 (0:00:00.330) 0:02:14.640 *******
2026-02-09 03:15:16.939018 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:15:16.939029 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:15:16.939059 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:15:16.939071 | orchestrator |
2026-02-09 03:15:16.939081 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-02-09 03:15:16.939092 | orchestrator | Monday 09 February 2026 03:15:12 +0000 (0:00:00.330) 0:02:14.970 *******
2026-02-09 03:15:16.939103 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:15:16.939114 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:15:16.939124 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:15:16.939135 | orchestrator |
2026-02-09 03:15:16.939146 | orchestrator | TASK [include_role : keystone] *************************************************
2026-02-09 03:15:16.939156 | orchestrator | Monday 09 February 2026 03:15:12 +0000 (0:00:00.311) 0:02:15.282 *******
2026-02-09 03:15:16.939167 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:15:16.939178 | orchestrator |
2026-02-09 03:15:16.939226 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-02-09 03:15:16.939244 | orchestrator | Monday 09 February 2026 03:15:13 +0000 (0:00:01.235) 0:02:16.517 *******
2026-02-09 03:15:16.939280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:15:16.939316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:15:16.939329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:15:16.939342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:15:16.939461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:15:17.549329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:15:17.549455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:15:17.549472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:15:17.549486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:15:17.549498 | orchestrator |
2026-02-09 03:15:17.549511 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-02-09 03:15:17.549524 | orchestrator | Monday 09 February 2026 03:15:16 +0000 (0:00:03.230) 0:02:19.747 *******
2026-02-09 03:15:17.549538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:15:17.549577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:15:17.549591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:15:17.549610 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:15:17.549624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:15:17.549636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:15:17.549648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:15:17.549659 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:15:17.549684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:15:26.867345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:15:26.867454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:15:26.867471 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:15:26.867486 | orchestrator |
2026-02-09 03:15:26.867498 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-02-09 03:15:26.867511 | orchestrator | Monday 09 February 2026 03:15:17 +0000 (0:00:00.610) 0:02:20.358 *******
2026-02-09 03:15:26.867524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-09 03:15:26.867538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-09 03:15:26.867551 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:15:26.867563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-09 03:15:26.867575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-09 03:15:26.867586 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:15:26.867598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-09 03:15:26.867609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-09 03:15:26.867620 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:15:26.867631 | orchestrator |
2026-02-09 03:15:26.867642 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-02-09 03:15:26.867653 | orchestrator | Monday 09 February 2026 03:15:18 +0000 (0:00:01.065) 0:02:21.424 *******
2026-02-09 03:15:26.867664 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:15:26.867675 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:15:26.867708 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:15:26.867719 | orchestrator |
2026-02-09 03:15:26.867730 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-02-09 03:15:26.867741 | orchestrator | Monday 09 February 2026 03:15:19 +0000 (0:00:01.288) 0:02:22.712 *******
2026-02-09 03:15:26.867752 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:15:26.867763 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:15:26.867773 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:15:26.867784 | orchestrator |
2026-02-09 03:15:26.867796 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-02-09 03:15:26.867809 | orchestrator | Monday 09 February 2026 03:15:21 +0000 (0:00:02.033) 0:02:24.746 *******
2026-02-09 03:15:26.867822 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:15:26.867848 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:15:26.867862 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:15:26.867875 | orchestrator |
2026-02-09 03:15:26.867888 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-02-09 03:15:26.867919 | orchestrator | Monday 09 February 2026 03:15:22 +0000 (0:00:00.339) 0:02:25.085 *******
2026-02-09 03:15:26.867933 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:15:26.867946 | orchestrator |
2026-02-09 03:15:26.867959 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-02-09 03:15:26.867972 | orchestrator | Monday 09 February 2026 03:15:23 +0000 (0:00:01.252) 0:02:26.338 *******
2026-02-09 03:15:26.867986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-09 03:15:26.868005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-09 03:15:26.868020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-09 03:15:26.868042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-09 03:15:26.868066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-09 03:15:32.169328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-09 03:15:32.169443 | orchestrator |
2026-02-09 03:15:32.169462 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-02-09 03:15:32.169475 | orchestrator | Monday 09 February 2026 03:15:26 +0000 (0:00:03.333) 0:02:29.671 *******
2026-02-09 03:15:32.169489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-09 03:15:32.169545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-09 03:15:32.169579 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:15:32.169599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-09 03:15:32.169633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-09 03:15:32.169646 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:15:32.169660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-09 03:15:32.169680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-09 03:15:32.169710 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:15:32.169727 | orchestrator |
2026-02-09 03:15:32.169746 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-02-09 03:15:32.169764 | orchestrator | Monday 09 February 2026 03:15:27 +0000 (0:00:00.641) 0:02:30.313 *******
2026-02-09 03:15:32.169784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-09 03:15:32.169804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-09 03:15:32.169825 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:15:32.169844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-09 03:15:32.169862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-09 03:15:32.169883 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:15:32.169901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-09 03:15:32.169920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-09 03:15:32.169938 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:15:32.169956 | orchestrator |
2026-02-09 03:15:32.169985 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-09 03:15:32.170003 | orchestrator | Monday 09 February 2026 03:15:28 +0000 (0:00:00.954) 0:02:31.268 *******
2026-02-09 03:15:32.170113 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:15:32.170126 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:15:32.170137 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:15:32.170147 | orchestrator |
2026-02-09 03:15:32.170158 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-09 03:15:32.170169 | orchestrator | Monday 09 February 2026 03:15:30 +0000 (0:00:01.637) 0:02:32.906 *******
2026-02-09 03:15:32.170180 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:15:32.170259 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:15:32.170277 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:15:32.170288 | orchestrator |
2026-02-09 03:15:32.170299 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-09 03:15:32.170323 | orchestrator | Monday 09 February 2026 03:15:32 +0000 (0:00:02.070) 0:02:34.976 *******
2026-02-09 03:15:36.740186 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:15:36.740371 | orchestrator |
2026-02-09 03:15:36.740410 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-09 03:15:36.740427 | orchestrator | Monday 09 February 2026 03:15:33 +0000 (0:00:01.067) 0:02:36.044 *******
2026-02-09 03:15:36.740442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 03:15:36.740480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes':
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:15:36.740495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 03:15:36.740514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 03:15:36.740549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-09 03:15:36.740593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:15:36.740614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 03:15:36.740639 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 03:15:36.740651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-09 03:15:36.740710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:15:36.740728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 03:15:36.740750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 03:15:37.778482 | orchestrator | 2026-02-09 03:15:37.778609 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-09 03:15:37.778635 | orchestrator | Monday 09 February 2026 03:15:36 +0000 (0:00:03.593) 0:02:39.637 ******* 2026-02-09 03:15:37.778686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-09 03:15:37.778710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:15:37.778727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 03:15:37.778743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 03:15:37.778759 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:15:37.778795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-09 03:15:37.778837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:15:37.778865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 03:15:37.778880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 03:15:37.778896 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:15:37.778912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-09 03:15:37.778927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:15:37.778949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 03:15:37.778975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 03:15:48.967507 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:15:48.967618 | orchestrator | 2026-02-09 03:15:48.967633 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-09 03:15:48.967645 | orchestrator | Monday 09 February 2026 03:15:37 +0000 (0:00:01.047) 0:02:40.685 ******* 2026-02-09 03:15:48.967657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-09 03:15:48.967669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-09 03:15:48.967681 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:15:48.967692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-09 03:15:48.967702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-09 03:15:48.967712 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:15:48.967721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-09 03:15:48.967731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-09 03:15:48.967741 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:15:48.967751 | orchestrator | 2026-02-09 03:15:48.967760 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-09 03:15:48.967770 | orchestrator | Monday 09 February 2026 03:15:38 +0000 (0:00:00.868) 0:02:41.554 ******* 2026-02-09 03:15:48.967780 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:15:48.967789 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:15:48.967799 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:15:48.967808 | orchestrator | 2026-02-09 03:15:48.967818 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-09 03:15:48.967827 | orchestrator | Monday 09 February 2026 03:15:40 +0000 (0:00:01.298) 0:02:42.853 ******* 2026-02-09 03:15:48.967842 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:15:48.967857 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:15:48.967871 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:15:48.967885 | orchestrator | 2026-02-09 03:15:48.967899 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-09 03:15:48.967912 | orchestrator | Monday 09 February 2026 03:15:42 +0000 (0:00:02.169) 0:02:45.022 ******* 2026-02-09 03:15:48.967926 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:15:48.967940 | orchestrator | 2026-02-09 03:15:48.967954 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-09 03:15:48.967969 | orchestrator | Monday 09 February 2026 03:15:43 +0000 (0:00:01.403) 0:02:46.425 ******* 2026-02-09 03:15:48.967984 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 03:15:48.967998 | orchestrator | 2026-02-09 03:15:48.968011 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-09 03:15:48.968054 | orchestrator | Monday 09 February 2026 03:15:46 +0000 (0:00:02.984) 0:02:49.410 ******* 2026-02-09 03:15:48.968123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:15:48.968149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 03:15:48.968166 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:15:48.968192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:15:48.968250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 03:15:48.968268 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:15:48.968300 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:15:51.333025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 03:15:51.333143 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:15:51.333165 | orchestrator | 2026-02-09 03:15:51.333181 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-09 03:15:51.333195 | orchestrator | Monday 09 February 2026 03:15:48 +0000 (0:00:02.363) 0:02:51.773 ******* 2026-02-09 03:15:51.333318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:15:51.333338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 03:15:51.333351 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:15:51.333394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:15:51.333444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-02-09 03:15:51.333462 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:15:51.333476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:15:51.333501 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 03:16:01.498428 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:01.498542 | orchestrator | 2026-02-09 03:16:01.498560 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-09 03:16:01.498574 | orchestrator | Monday 09 February 2026 03:15:51 +0000 (0:00:02.362) 0:02:54.136 ******* 2026-02-09 03:16:01.498588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 03:16:01.498628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 03:16:01.498664 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:01.498677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 03:16:01.498689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 03:16:01.498700 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:01.498712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 03:16:01.498723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 03:16:01.498734 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:01.498745 | orchestrator | 2026-02-09 03:16:01.498756 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-09 03:16:01.498767 | orchestrator | Monday 09 February 2026 03:15:54 +0000 (0:00:02.688) 0:02:56.824 ******* 2026-02-09 03:16:01.498778 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:16:01.498829 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:16:01.498849 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:16:01.498867 | orchestrator | 2026-02-09 03:16:01.498888 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-09 03:16:01.498907 | orchestrator | Monday 09 February 2026 03:15:56 +0000 (0:00:02.203) 0:02:59.027 ******* 2026-02-09 03:16:01.498926 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:01.498940 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:01.498953 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:01.498965 | orchestrator | 2026-02-09 03:16:01.498979 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-09 03:16:01.498993 | orchestrator | Monday 09 February 2026 03:15:58 +0000 (0:00:01.877) 0:03:00.905 ******* 2026-02-09 03:16:01.499005 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:01.499018 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:01.499031 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:01.499044 | orchestrator | 2026-02-09 03:16:01.499057 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-09 03:16:01.499070 | orchestrator | Monday 09 February 2026 03:15:58 +0000 (0:00:00.329) 0:03:01.234 ******* 2026-02-09 03:16:01.499083 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:16:01.499097 | orchestrator | 2026-02-09 03:16:01.499111 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-09 03:16:01.499124 | orchestrator | Monday 09 February 2026 03:15:59 +0000 (0:00:01.369) 0:03:02.603 ******* 2026-02-09 03:16:01.499146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-09 03:16:01.499186 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-09 03:16:01.499212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-09 03:16:01.499256 | orchestrator | 2026-02-09 03:16:01.499270 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-09 03:16:01.499292 | orchestrator | Monday 09 February 2026 03:16:01 +0000 (0:00:01.509) 0:03:04.113 ******* 2026-02-09 03:16:01.499314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-09 03:16:10.082573 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:10.082660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-09 03:16:10.082671 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:10.082677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-09 03:16:10.082683 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:10.082689 | orchestrator | 2026-02-09 03:16:10.082696 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-09 03:16:10.082708 | orchestrator | Monday 09 February 2026 03:16:01 +0000 (0:00:00.392) 0:03:04.506 ******* 2026-02-09 03:16:10.082719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-09 03:16:10.082730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-09 03:16:10.082739 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:10.082749 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:10.082758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-09 03:16:10.082787 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:10.082795 | orchestrator | 2026-02-09 03:16:10.082832 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-09 03:16:10.082838 | orchestrator | Monday 09 February 2026 03:16:02 +0000 (0:00:00.899) 0:03:05.405 ******* 2026-02-09 03:16:10.082844 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:10.082849 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:10.082855 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:10.082860 | orchestrator | 2026-02-09 03:16:10.082866 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-09 03:16:10.082871 | orchestrator | Monday 09 February 2026 03:16:03 +0000 (0:00:00.445) 0:03:05.851 ******* 2026-02-09 03:16:10.082877 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:10.082882 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:10.082887 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:10.082893 | orchestrator | 2026-02-09 03:16:10.082898 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-09 03:16:10.082904 | orchestrator | Monday 09 February 2026 03:16:04 +0000 (0:00:01.295) 0:03:07.147 ******* 2026-02-09 03:16:10.082909 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:10.082915 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:10.082920 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:10.082926 | orchestrator | 2026-02-09 03:16:10.082931 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-09 03:16:10.082936 | orchestrator | Monday 09 February 2026 03:16:04 +0000 (0:00:00.336) 0:03:07.483 ******* 2026-02-09 03:16:10.082941 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:16:10.082947 | orchestrator | 2026-02-09 03:16:10.082952 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-02-09 03:16:10.082958 | orchestrator | Monday 09 February 2026 03:16:06 +0000 (0:00:01.484) 0:03:08.967 ******* 2026-02-09 03:16:10.082977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:16:10.082989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.082996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.083009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.083016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:16:10.083027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-09 03:16:10.199109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.199299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.199342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.199357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:10.199373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.199408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-09 03:16:10.199447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:10.199463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.199486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.199501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:10.199516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:10.199531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:10.199555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.315112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.315194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-09 03:16:10.315202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:10.315208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:10.315212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.315217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.315274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-09 03:16:10.315285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 03:16:10.315289 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:10.315294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:10.315298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.315303 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:16:10.315313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 03:16:10.644396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.644530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:10.644559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.644581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.644739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-09 03:16:10.644830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.644860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:10.644883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:10.644902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.644923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:10.644950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:10.644981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-09 03:16:10.645012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:11.754691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.754820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 03:16:11.754854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:11.754876 | orchestrator | 2026-02-09 03:16:11.754898 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-09 03:16:11.754951 | orchestrator | Monday 09 February 2026 03:16:10 +0000 (0:00:04.477) 0:03:13.445 ******* 2026-02-09 03:16:11.754992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:16:11.755039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.755061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.755080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.755098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-09 03:16:11.755137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.755159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:11.755181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:11.755212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.866584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:11.866687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:16:11.866729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.866757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.866771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-09 03:16:11.866804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.866818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:11.866829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.866848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:11.866865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-09 03:16:11.866878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 03:16:11.866900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.160919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:12.161037 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:12.161052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:12.161065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:12.161092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.161109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:12.161124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.161162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-09 03:16:12.161192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:12.161208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.161306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 03:16:12.161330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:12.161346 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:12.161363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:16:12.161447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.412872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.412955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.412962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-09 03:16:12.412969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.412974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:12.412995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:12.413012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.413020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:12.413024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.413029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2026-02-09 03:16:12.413034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-09 03:16:12.413039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 03:16:12.413051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 03:16:23.014819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:16:23.014920 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:23.014928 | orchestrator | 2026-02-09 03:16:23.014933 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-09 03:16:23.014939 | orchestrator | Monday 09 February 2026 03:16:12 +0000 (0:00:01.780) 0:03:15.226 ******* 2026-02-09 03:16:23.014944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-09 03:16:23.014950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-09 03:16:23.014955 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:23.014959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-09 03:16:23.014963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-09 03:16:23.014967 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:23.014971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-09 03:16:23.014975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-09 03:16:23.014996 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:23.015001 | orchestrator | 2026-02-09 03:16:23.015005 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-09 03:16:23.015009 | orchestrator | Monday 09 February 2026 03:16:14 +0000 (0:00:02.074) 0:03:17.300 ******* 2026-02-09 03:16:23.015012 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:16:23.015016 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:16:23.015020 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:16:23.015024 | orchestrator | 2026-02-09 03:16:23.015028 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-09 03:16:23.015032 | orchestrator | Monday 09 February 2026 03:16:15 +0000 (0:00:01.329) 0:03:18.629 ******* 2026-02-09 03:16:23.015037 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:16:23.015097 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:16:23.015102 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:16:23.015106 | orchestrator | 2026-02-09 
03:16:23.015110 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-09 03:16:23.015114 | orchestrator | Monday 09 February 2026 03:16:17 +0000 (0:00:02.104) 0:03:20.734 ******* 2026-02-09 03:16:23.015118 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:16:23.015122 | orchestrator | 2026-02-09 03:16:23.015126 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-09 03:16:23.015130 | orchestrator | Monday 09 February 2026 03:16:19 +0000 (0:00:01.196) 0:03:21.930 ******* 2026-02-09 03:16:23.015136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:16:23.015160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:16:23.015164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:16:23.015173 | orchestrator | 2026-02-09 03:16:23.015179 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-09 03:16:23.015185 | orchestrator | Monday 09 February 2026 03:16:22 +0000 (0:00:03.374) 0:03:25.304 ******* 2026-02-09 03:16:23.015192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:16:23.015196 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:23.015200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:16:23.015204 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:23.015215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:16:33.058529 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:33.058699 | orchestrator | 2026-02-09 03:16:33.058727 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-09 03:16:33.058748 | orchestrator | Monday 09 February 2026 03:16:22 +0000 (0:00:00.517) 0:03:25.821 ******* 2026-02-09 03:16:33.058761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-09 03:16:33.058806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-09 03:16:33.058820 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:33.058830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-09 
03:16:33.058841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-09 03:16:33.058851 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:33.058860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-09 03:16:33.058870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-09 03:16:33.058880 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:33.058889 | orchestrator | 2026-02-09 03:16:33.058899 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-09 03:16:33.058909 | orchestrator | Monday 09 February 2026 03:16:23 +0000 (0:00:00.821) 0:03:26.643 ******* 2026-02-09 03:16:33.058919 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:16:33.058929 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:16:33.058938 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:16:33.058948 | orchestrator | 2026-02-09 03:16:33.058958 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-09 03:16:33.058967 | orchestrator | Monday 09 February 2026 03:16:25 +0000 (0:00:01.631) 0:03:28.274 ******* 2026-02-09 03:16:33.058977 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:16:33.058987 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:16:33.058996 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:16:33.059006 | orchestrator | 2026-02-09 03:16:33.059015 | orchestrator 
| TASK [include_role : nova] ***************************************************** 2026-02-09 03:16:33.059027 | orchestrator | Monday 09 February 2026 03:16:27 +0000 (0:00:02.114) 0:03:30.389 ******* 2026-02-09 03:16:33.059040 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:16:33.059051 | orchestrator | 2026-02-09 03:16:33.059063 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-09 03:16:33.059074 | orchestrator | Monday 09 February 2026 03:16:28 +0000 (0:00:01.259) 0:03:31.649 ******* 2026-02-09 03:16:33.059097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 03:16:33.059189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 03:16:33.059217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:16:33.059238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:16:33.059301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 03:16:33.059330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 03:16:33.059366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 03:16:34.122158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:16:34.122356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 03:16:34.122384 | orchestrator | 2026-02-09 03:16:34.122406 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-09 03:16:34.122426 | orchestrator | Monday 09 February 2026 03:16:33 +0000 (0:00:04.211) 0:03:35.860 ******* 2026-02-09 03:16:34.122450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-09 03:16:34.122510 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:16:34.122552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 03:16:34.122576 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:34.122629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-09 03:16:34.122649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:16:34.122663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 03:16:34.122676 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:34.122702 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-09 03:16:34.122755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 03:16:46.485540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 03:16:46.485653 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:46.485662 | orchestrator | 2026-02-09 03:16:46.485668 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-09 03:16:46.485674 | orchestrator | Monday 09 February 2026 03:16:34 +0000 (0:00:01.062) 0:03:36.922 ******* 2026-02-09 03:16:46.485681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485711 | orchestrator | skipping: [testbed-node-0] 2026-02-09 
03:16:46.485715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485753 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:46.485757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-09 03:16:46.485789 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:16:46.485793 | orchestrator | 2026-02-09 03:16:46.485798 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-09 03:16:46.485802 | orchestrator | Monday 09 February 2026 03:16:35 +0000 (0:00:00.956) 0:03:37.879 ******* 2026-02-09 03:16:46.485807 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:16:46.485811 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:16:46.485815 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:16:46.485819 | orchestrator | 2026-02-09 03:16:46.485824 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-09 03:16:46.485828 | orchestrator | Monday 09 February 2026 03:16:36 +0000 (0:00:01.438) 0:03:39.318 ******* 2026-02-09 03:16:46.485832 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:16:46.485837 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:16:46.485854 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:16:46.485859 | orchestrator | 2026-02-09 03:16:46.485863 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-09 03:16:46.485868 | orchestrator | Monday 09 February 2026 03:16:38 +0000 (0:00:02.115) 0:03:41.433 ******* 2026-02-09 03:16:46.485872 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:16:46.485876 | orchestrator | 2026-02-09 03:16:46.485881 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-09 03:16:46.485885 | orchestrator | Monday 09 February 2026 03:16:40 +0000 (0:00:01.608) 0:03:43.042 ******* 2026-02-09 03:16:46.485890 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-02-09 03:16:46.485896 | orchestrator | 2026-02-09 03:16:46.485900 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-09 03:16:46.485905 | orchestrator | Monday 09 February 2026 03:16:41 +0000 (0:00:00.842) 0:03:43.884 ******* 2026-02-09 03:16:46.485910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-09 03:16:46.485922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-09 03:16:46.485927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-09 03:16:46.485932 | orchestrator | 
2026-02-09 03:16:46.485937 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-09 03:16:46.485943 | orchestrator | Monday 09 February 2026 03:16:45 +0000 (0:00:04.378) 0:03:48.262 ******* 2026-02-09 03:16:46.485948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 03:16:46.485953 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:16:46.485960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 03:16:46.485965 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:16:46.485969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 03:16:46.485977 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:07.218427 | orchestrator | 2026-02-09 03:17:07.218530 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-09 03:17:07.218542 | orchestrator | Monday 09 February 2026 03:16:46 +0000 (0:00:01.029) 0:03:49.292 ******* 2026-02-09 03:17:07.218550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 03:17:07.218561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 03:17:07.218602 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:07.218611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 03:17:07.218617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 03:17:07.218624 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:07.218630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 03:17:07.218635 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 03:17:07.218642 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:07.218647 | orchestrator | 2026-02-09 03:17:07.218654 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-09 03:17:07.218660 | orchestrator | Monday 09 February 2026 03:16:48 +0000 (0:00:01.556) 0:03:50.848 ******* 2026-02-09 03:17:07.218666 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:17:07.218671 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:17:07.218677 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:17:07.218683 | orchestrator | 2026-02-09 03:17:07.218689 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-09 03:17:07.218695 | orchestrator | Monday 09 February 2026 03:16:50 +0000 (0:00:02.495) 0:03:53.343 ******* 2026-02-09 03:17:07.218700 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:17:07.218706 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:17:07.218712 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:17:07.218718 | orchestrator | 2026-02-09 03:17:07.218724 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-09 03:17:07.218730 | orchestrator | Monday 09 February 2026 03:16:53 +0000 (0:00:03.079) 0:03:56.422 ******* 2026-02-09 03:17:07.218737 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-09 03:17:07.218745 | orchestrator | 2026-02-09 03:17:07.218751 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-09 03:17:07.218757 | orchestrator | 
Monday 09 February 2026 03:16:55 +0000 (0:00:01.403) 0:03:57.825 ******* 2026-02-09 03:17:07.218781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 03:17:07.218790 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:07.218796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 03:17:07.218809 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:07.218833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 03:17:07.218840 | orchestrator | skipping: [testbed-node-2] 2026-02-09 
03:17:07.218846 | orchestrator | 2026-02-09 03:17:07.218851 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-09 03:17:07.218858 | orchestrator | Monday 09 February 2026 03:16:56 +0000 (0:00:01.370) 0:03:59.196 ******* 2026-02-09 03:17:07.218864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 03:17:07.218870 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:07.218876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 03:17:07.218882 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:07.218888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 03:17:07.218894 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:07.218900 | orchestrator | 2026-02-09 03:17:07.218906 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-09 03:17:07.218911 | orchestrator | Monday 09 February 2026 03:16:57 +0000 (0:00:01.325) 0:04:00.521 ******* 2026-02-09 03:17:07.218918 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:07.218923 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:07.218929 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:07.218935 | orchestrator | 2026-02-09 03:17:07.218941 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-09 03:17:07.218947 | orchestrator | Monday 09 February 2026 03:16:59 +0000 (0:00:01.801) 0:04:02.323 ******* 2026-02-09 03:17:07.218953 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:17:07.218960 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:17:07.218967 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:17:07.218979 | orchestrator | 2026-02-09 03:17:07.218991 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-09 03:17:07.219003 | orchestrator | Monday 09 February 2026 03:17:01 +0000 (0:00:02.439) 0:04:04.762 ******* 2026-02-09 03:17:07.219014 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:17:07.219020 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:17:07.219026 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:17:07.219032 | orchestrator | 2026-02-09 03:17:07.219042 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-09 03:17:07.219048 | orchestrator | Monday 09 February 2026 03:17:05 +0000 (0:00:03.075) 0:04:07.837 ******* 2026-02-09 03:17:07.219055 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-09 03:17:07.219061 | orchestrator | 2026-02-09 03:17:07.219068 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-09 03:17:07.219074 | orchestrator | Monday 09 February 2026 03:17:05 +0000 (0:00:00.888) 0:04:08.726 ******* 2026-02-09 03:17:07.219088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 03:17:20.841559 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:20.841665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 03:17:20.841688 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:20.841704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 03:17:20.841720 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:20.841735 | orchestrator | 2026-02-09 03:17:20.841751 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-09 03:17:20.841767 | orchestrator | Monday 09 February 2026 03:17:07 +0000 (0:00:01.300) 0:04:10.027 ******* 2026-02-09 03:17:20.841783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 03:17:20.841799 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:20.841815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 03:17:20.841860 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:20.841876 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 03:17:20.841890 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:20.841905 | orchestrator | 2026-02-09 03:17:20.841938 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-09 03:17:20.841955 | orchestrator | Monday 09 February 2026 03:17:08 +0000 (0:00:01.425) 0:04:11.452 ******* 2026-02-09 03:17:20.841970 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:20.841985 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:20.841999 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:20.842089 | orchestrator | 2026-02-09 03:17:20.842102 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-09 03:17:20.842113 | orchestrator | Monday 09 February 2026 03:17:10 +0000 (0:00:01.559) 0:04:13.012 ******* 2026-02-09 03:17:20.842123 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:17:20.842135 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:17:20.842145 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:17:20.842155 | orchestrator | 2026-02-09 03:17:20.842166 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-09 03:17:20.842176 | orchestrator | Monday 09 February 2026 03:17:12 +0000 (0:00:02.440) 0:04:15.453 ******* 2026-02-09 03:17:20.842186 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:17:20.842197 | orchestrator | ok: 
[testbed-node-1] 2026-02-09 03:17:20.842207 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:17:20.842218 | orchestrator | 2026-02-09 03:17:20.842228 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-09 03:17:20.842238 | orchestrator | Monday 09 February 2026 03:17:15 +0000 (0:00:03.254) 0:04:18.707 ******* 2026-02-09 03:17:20.842265 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:17:20.842275 | orchestrator | 2026-02-09 03:17:20.842310 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-09 03:17:20.842319 | orchestrator | Monday 09 February 2026 03:17:17 +0000 (0:00:01.686) 0:04:20.394 ******* 2026-02-09 03:17:20.842330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 03:17:20.842341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 03:17:20.842363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 03:17:20.842374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 03:17:20.842391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:17:20.842409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 03:17:21.603401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 03:17:21.603508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 03:17:21.603550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 03:17:21.603565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:17:21.603577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 03:17:21.603589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 03:17:21.603620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 03:17:21.603632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 03:17:21.603692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:17:21.603707 | orchestrator | 2026-02-09 03:17:21.603720 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-09 03:17:21.603732 | orchestrator | Monday 09 February 2026 03:17:20 +0000 (0:00:03.409) 0:04:23.803 ******* 2026-02-09 03:17:21.603750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 03:17:21.603763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 03:17:21.603774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 03:17:21.603795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 03:17:22.990561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:17:22.990650 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:22.990660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 03:17:22.990667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 03:17:22.990684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 03:17:22.990689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 03:17:22.990694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:17:22.990703 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:22.990722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 03:17:22.990728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 03:17:22.990733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 03:17:22.990742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 03:17:22.990747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 03:17:22.990753 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:22.990758 | orchestrator | 2026-02-09 03:17:22.990775 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-09 03:17:22.990782 | orchestrator | Monday 09 February 2026 03:17:21 +0000 (0:00:00.759) 0:04:24.562 ******* 2026-02-09 03:17:22.990788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 03:17:22.990803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 03:17:22.990810 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:22.990820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 03:17:34.223409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 03:17:34.223554 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:34.223573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 03:17:34.223585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 03:17:34.223595 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:34.223605 | orchestrator | 2026-02-09 03:17:34.223617 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-09 03:17:34.223629 | orchestrator | Monday 09 February 2026 03:17:22 +0000 (0:00:01.227) 0:04:25.790 ******* 2026-02-09 03:17:34.223638 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:17:34.223647 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:17:34.223656 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:17:34.223665 | orchestrator | 2026-02-09 03:17:34.223675 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-09 03:17:34.223684 | orchestrator | Monday 09 February 2026 03:17:24 +0000 (0:00:01.410) 0:04:27.201 ******* 2026-02-09 03:17:34.223693 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:17:34.223702 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:17:34.223712 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:17:34.223721 | orchestrator | 2026-02-09 03:17:34.223730 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-09 03:17:34.223739 | orchestrator | Monday 09 February 2026 03:17:26 +0000 (0:00:02.169) 0:04:29.371 ******* 2026-02-09 03:17:34.223748 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:17:34.223758 | orchestrator | 2026-02-09 03:17:34.223766 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] 
***************** 2026-02-09 03:17:34.223775 | orchestrator | Monday 09 February 2026 03:17:27 +0000 (0:00:01.430) 0:04:30.801 ******* 2026-02-09 03:17:34.223811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-09 03:17:34.223830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-09 03:17:34.223886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-09 03:17:34.223900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-09 03:17:34.223917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-09 03:17:34.223929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-09 03:17:34.223947 | orchestrator | 2026-02-09 03:17:34.223956 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-09 03:17:34.223965 | orchestrator | Monday 09 February 2026 03:17:33 +0000 (0:00:05.564) 0:04:36.365 ******* 2026-02-09 03:17:34.223983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-09 03:17:38.819824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-09 03:17:38.819943 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:38.819976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-09 03:17:38.819990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-09 03:17:38.820042 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:38.820053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-09 03:17:38.820080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-09 03:17:38.820091 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:38.820101 | orchestrator | 2026-02-09 03:17:38.820111 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-09 03:17:38.820122 | orchestrator | Monday 09 February 2026 03:17:34 +0000 (0:00:00.666) 0:04:37.032 ******* 2026-02-09 03:17:38.820132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-09 03:17:38.820143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-09 03:17:38.820156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-09 03:17:38.820175 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:38.820190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2026-02-09 03:17:38.820199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-09 03:17:38.820209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-09 03:17:38.820218 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:38.820226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-09 03:17:38.820235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-09 03:17:38.820244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-09 03:17:38.820253 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:38.820262 | orchestrator | 2026-02-09 03:17:38.820270 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-09 03:17:38.820279 | orchestrator | Monday 09 February 2026 03:17:35 +0000 (0:00:00.981) 0:04:38.013 ******* 2026-02-09 03:17:38.820288 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:38.820326 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:38.820341 | orchestrator | 
skipping: [testbed-node-2] 2026-02-09 03:17:38.820356 | orchestrator | 2026-02-09 03:17:38.820373 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-09 03:17:38.820388 | orchestrator | Monday 09 February 2026 03:17:35 +0000 (0:00:00.436) 0:04:38.450 ******* 2026-02-09 03:17:38.820404 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:38.820415 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:38.820426 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:38.820436 | orchestrator | 2026-02-09 03:17:38.820446 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-09 03:17:38.820456 | orchestrator | Monday 09 February 2026 03:17:37 +0000 (0:00:01.443) 0:04:39.894 ******* 2026-02-09 03:17:38.820475 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:17:41.252224 | orchestrator | 2026-02-09 03:17:41.252378 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-09 03:17:41.252394 | orchestrator | Monday 09 February 2026 03:17:38 +0000 (0:00:01.733) 0:04:41.627 ******* 2026-02-09 03:17:41.252406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}}) 2026-02-09 03:17:41.252439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 03:17:41.252460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:17:41.252469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:17:41.252478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 03:17:41.252487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-09 03:17:41.252510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-09 
03:17:41.252518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 03:17:41.252532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:17:41.252544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 03:17:41.252552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:17:41.252559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:17:41.252567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 03:17:41.252581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:17:42.871610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 03:17:42.871710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-09 03:17:42.871737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-09 03:17:42.871750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:17:42.871761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 03:17:42.871773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-09 03:17:42.871802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-09 03:17:42.871827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']},
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-09 03:17:42.871845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-09 03:17:42.871859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:42.871873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False,
'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-09 03:17:43.554140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:43.554227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:43.554240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:43.554264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-09 03:17:43.554275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-09 03:17:43.554283 | orchestrator |
2026-02-09 03:17:43.554293 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-02-09 03:17:43.554367 | orchestrator | Monday 09 February 2026 03:17:42 +0000 (0:00:04.187) 0:04:45.814 *******
2026-02-09 03:17:43.554377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes':
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-09 03:17:43.554386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-09 03:17:43.554429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:43.554437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes':
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:43.554446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-09 03:17:43.554460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-09 03:17:43.554470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name':
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-09 03:17:43.554485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-09 03:17:44.099705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes':
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-09 03:17:44.099784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:44.099806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:44.099813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:44.099819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name':
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:44.099826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-09 03:17:44.099833 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:17:44.099841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-09 03:17:44.099879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes':
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-09 03:17:44.099887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-09 03:17:44.099897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes':
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:44.099903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:44.099908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-09 03:17:44.099914 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:17:44.099921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True},
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-09 03:17:44.099940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-09 03:17:45.636259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:45.636399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:45.636427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group':
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-09 03:17:45.636438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-09 03:17:45.636447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-09 03:17:45.636479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:45.636503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:17:45.636510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-09 03:17:45.636518 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:17:45.636527 | orchestrator |
2026-02-09 03:17:45.636535 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-02-09 03:17:45.636542 | orchestrator | Monday 09 February 2026 03:17:44 +0000 (0:00:01.549) 0:04:47.363 *******
2026-02-09 03:17:45.636554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-09 03:17:45.636564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-09 03:17:45.636573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-09 03:17:45.636583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-09 03:17:45.636591 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:17:45.636600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-09 03:17:45.636618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value':
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-09 03:17:45.636629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-09 03:17:45.636640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-09 03:17:45.636651 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:17:45.636662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-09 03:17:45.636672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-09 03:17:45.636684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-09 03:17:45.636704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin',
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-09 03:17:53.590272 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:17:53.590409 | orchestrator |
2026-02-09 03:17:53.590419 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-02-09 03:17:53.590426 | orchestrator | Monday 09 February 2026 03:17:45 +0000 (0:00:01.069) 0:04:48.433 *******
2026-02-09 03:17:53.590431 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:17:53.590436 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:17:53.590441 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:17:53.590445 | orchestrator |
2026-02-09 03:17:53.590450 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-02-09 03:17:53.590455 | orchestrator | Monday 09 February 2026 03:17:46 +0000 (0:00:00.477) 0:04:48.911 *******
2026-02-09 03:17:53.590460 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:17:53.590465 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:17:53.590469 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:17:53.590474 | orchestrator |
2026-02-09 03:17:53.590478 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-02-09 03:17:53.590483 | orchestrator | Monday 09 February 2026 03:17:47 +0000 (0:00:01.399) 0:04:50.310 *******
2026-02-09 03:17:53.590487 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:17:53.590492 | orchestrator |
2026-02-09 03:17:53.590496 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-02-09 03:17:53.590501 | orchestrator | Monday 09 February 2026 03:17:49 +0000 (0:00:01.772) 0:04:52.083 *******
2026-02-09 03:17:53.590509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None,
'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:17:53.590532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:17:53.590567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:17:53.590573 | orchestrator | 2026-02-09 03:17:53.590578 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-09 03:17:53.590596 | orchestrator | Monday 09 February 2026 03:17:51 +0000 (0:00:02.385) 0:04:54.468 ******* 2026-02-09 03:17:53.590607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 03:17:53.590621 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:53.590629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 03:17:53.590636 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:53.590644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 03:17:53.590651 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:53.590658 | orchestrator | 2026-02-09 03:17:53.590665 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-09 03:17:53.590672 | orchestrator | Monday 09 February 2026 03:17:52 +0000 (0:00:00.439) 0:04:54.908 ******* 2026-02-09 03:17:53.590680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-09 03:17:53.590689 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:17:53.590697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-09 03:17:53.590704 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:17:53.590711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-09 03:17:53.590719 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:17:53.590727 | orchestrator | 2026-02-09 03:17:53.590734 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-09 03:17:53.590742 | orchestrator | Monday 09 February 2026 03:17:53 +0000 (0:00:01.007) 0:04:55.915 ******* 2026-02-09 03:17:53.590754 | orchestrator | skipping: [testbed-node-0] 
2026-02-09 03:18:03.576299 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:03.576439 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:03.576453 | orchestrator | 2026-02-09 03:18:03.576465 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-09 03:18:03.576476 | orchestrator | Monday 09 February 2026 03:17:53 +0000 (0:00:00.487) 0:04:56.403 ******* 2026-02-09 03:18:03.576486 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:18:03.576521 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:03.576532 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:03.576542 | orchestrator | 2026-02-09 03:18:03.576552 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-09 03:18:03.576561 | orchestrator | Monday 09 February 2026 03:17:54 +0000 (0:00:01.341) 0:04:57.745 ******* 2026-02-09 03:18:03.576571 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:18:03.576582 | orchestrator | 2026-02-09 03:18:03.576592 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-09 03:18:03.576602 | orchestrator | Monday 09 February 2026 03:17:56 +0000 (0:00:01.535) 0:04:59.280 ******* 2026-02-09 03:18:03.576627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 03:18:03.576644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 03:18:03.576655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 03:18:03.576685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 03:18:03.576718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 03:18:03.576730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 03:18:03.576740 | orchestrator | 2026-02-09 03:18:03.576750 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-09 03:18:03.576761 | orchestrator | Monday 09 February 2026 03:18:02 +0000 (0:00:06.400) 0:05:05.681 ******* 2026-02-09 03:18:03.576771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-09 03:18:03.576789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-09 03:18:09.753393 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:18:09.753517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-09 03:18:09.753537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-09 03:18:09.753549 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:09.753559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-09 03:18:09.753571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-09 03:18:09.753675 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:09.753685 | orchestrator | 2026-02-09 03:18:09.753693 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-09 03:18:09.753702 | orchestrator | Monday 09 February 2026 03:18:03 +0000 (0:00:00.704) 0:05:06.385 ******* 2026-02-09 03:18:09.753725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753765 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:18:09.753771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-09 
03:18:09.753799 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:09.753805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-09 03:18:09.753832 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:09.753839 | orchestrator | 2026-02-09 03:18:09.753852 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-09 03:18:09.753859 | orchestrator | Monday 09 February 2026 03:18:04 +0000 (0:00:00.975) 0:05:07.360 ******* 2026-02-09 03:18:09.753865 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:18:09.753872 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:18:09.753879 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:18:09.753886 | orchestrator | 2026-02-09 03:18:09.753892 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-09 03:18:09.753899 | orchestrator | Monday 09 February 2026 03:18:06 +0000 (0:00:02.042) 0:05:09.403 ******* 2026-02-09 03:18:09.753906 | orchestrator | 
changed: [testbed-node-0] 2026-02-09 03:18:09.753912 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:18:09.753919 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:18:09.753925 | orchestrator | 2026-02-09 03:18:09.753932 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-09 03:18:09.753939 | orchestrator | Monday 09 February 2026 03:18:08 +0000 (0:00:01.830) 0:05:11.233 ******* 2026-02-09 03:18:09.753958 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:18:09.753965 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:09.753980 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:09.753987 | orchestrator | 2026-02-09 03:18:09.753994 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-09 03:18:09.754000 | orchestrator | Monday 09 February 2026 03:18:09 +0000 (0:00:00.667) 0:05:11.901 ******* 2026-02-09 03:18:09.754007 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:18:09.754061 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:09.754069 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:09.754076 | orchestrator | 2026-02-09 03:18:09.754082 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-09 03:18:09.754089 | orchestrator | Monday 09 February 2026 03:18:09 +0000 (0:00:00.324) 0:05:12.226 ******* 2026-02-09 03:18:09.754096 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:18:09.754108 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:55.022685 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:55.022777 | orchestrator | 2026-02-09 03:18:55.022788 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-09 03:18:55.022796 | orchestrator | Monday 09 February 2026 03:18:09 +0000 (0:00:00.337) 0:05:12.563 ******* 2026-02-09 03:18:55.022803 | orchestrator | 
skipping: [testbed-node-0] 2026-02-09 03:18:55.022810 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:55.022816 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:55.022823 | orchestrator | 2026-02-09 03:18:55.022829 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-09 03:18:55.022836 | orchestrator | Monday 09 February 2026 03:18:10 +0000 (0:00:00.327) 0:05:12.891 ******* 2026-02-09 03:18:55.022842 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:18:55.022849 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:55.022855 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:55.022861 | orchestrator | 2026-02-09 03:18:55.022867 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-09 03:18:55.022887 | orchestrator | Monday 09 February 2026 03:18:10 +0000 (0:00:00.663) 0:05:13.555 ******* 2026-02-09 03:18:55.022894 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:18:55.022900 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:18:55.022906 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:18:55.022912 | orchestrator | 2026-02-09 03:18:55.022919 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-09 03:18:55.022925 | orchestrator | Monday 09 February 2026 03:18:11 +0000 (0:00:00.576) 0:05:14.132 ******* 2026-02-09 03:18:55.022931 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:18:55.022938 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:18:55.022945 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:18:55.022951 | orchestrator | 2026-02-09 03:18:55.022957 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-09 03:18:55.022982 | orchestrator | Monday 09 February 2026 03:18:11 +0000 (0:00:00.684) 0:05:14.817 ******* 2026-02-09 03:18:55.022989 | orchestrator | ok: [testbed-node-0] 
2026-02-09 03:18:55.022995 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:18:55.023001 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:18:55.023007 | orchestrator |
2026-02-09 03:18:55.023013 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-02-09 03:18:55.023019 | orchestrator | Monday 09 February 2026 03:18:12 +0000 (0:00:00.717) 0:05:15.534 *******
2026-02-09 03:18:55.023026 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:18:55.023032 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:18:55.023038 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:18:55.023044 | orchestrator |
2026-02-09 03:18:55.023050 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-02-09 03:18:55.023056 | orchestrator | Monday 09 February 2026 03:18:13 +0000 (0:00:00.821) 0:05:16.356 *******
2026-02-09 03:18:55.023063 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:18:55.023069 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:18:55.023075 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:18:55.023081 | orchestrator |
2026-02-09 03:18:55.023087 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-02-09 03:18:55.023093 | orchestrator | Monday 09 February 2026 03:18:14 +0000 (0:00:00.829) 0:05:17.185 *******
2026-02-09 03:18:55.023100 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:18:55.023106 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:18:55.023112 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:18:55.023118 | orchestrator |
2026-02-09 03:18:55.023124 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-09 03:18:55.023130 | orchestrator | Monday 09 February 2026 03:18:15 +0000 (0:00:00.809) 0:05:17.994 *******
2026-02-09 03:18:55.023137 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:18:55.023143 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:18:55.023149 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:18:55.023155 | orchestrator |
2026-02-09 03:18:55.023161 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-09 03:18:55.023167 | orchestrator | Monday 09 February 2026 03:18:25 +0000 (0:00:09.892) 0:05:27.886 *******
2026-02-09 03:18:55.023174 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:18:55.023180 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:18:55.023186 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:18:55.023193 | orchestrator |
2026-02-09 03:18:55.023199 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-09 03:18:55.023205 | orchestrator | Monday 09 February 2026 03:18:25 +0000 (0:00:00.743) 0:05:28.630 *******
2026-02-09 03:18:55.023213 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:18:55.023220 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:18:55.023228 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:18:55.023235 | orchestrator |
2026-02-09 03:18:55.023246 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-09 03:18:55.023258 | orchestrator | Monday 09 February 2026 03:18:36 +0000 (0:00:10.819) 0:05:39.450 *******
2026-02-09 03:18:55.023266 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:18:55.023278 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:18:55.023286 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:18:55.023294 | orchestrator |
2026-02-09 03:18:55.023301 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-09 03:18:55.023308 | orchestrator | Monday 09 February 2026 03:18:41 +0000 (0:00:04.742) 0:05:44.192 *******
2026-02-09 03:18:55.023315 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:18:55.023322 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:18:55.023330 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:18:55.023337 | orchestrator |
2026-02-09 03:18:55.023345 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-09 03:18:55.023374 | orchestrator | Monday 09 February 2026 03:18:45 +0000 (0:00:04.289) 0:05:48.482 *******
2026-02-09 03:18:55.023390 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:18:55.023398 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:18:55.023405 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:18:55.023413 | orchestrator |
2026-02-09 03:18:55.023420 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-09 03:18:55.023428 | orchestrator | Monday 09 February 2026 03:18:46 +0000 (0:00:00.747) 0:05:49.229 *******
2026-02-09 03:18:55.023436 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:18:55.023443 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:18:55.023450 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:18:55.023457 | orchestrator |
2026-02-09 03:18:55.023477 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-09 03:18:55.023486 | orchestrator | Monday 09 February 2026 03:18:46 +0000 (0:00:00.360) 0:05:49.589 *******
2026-02-09 03:18:55.023506 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:18:55.023513 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:18:55.023519 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:18:55.023526 | orchestrator |
2026-02-09 03:18:55.023532 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-09 03:18:55.023538 | orchestrator | Monday 09 February 2026 03:18:47 +0000 (0:00:00.377) 0:05:49.966 *******
2026-02-09 03:18:55.023544 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:18:55.023554 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:18:55.023565 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:18:55.023571 | orchestrator |
2026-02-09 03:18:55.023578 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-09 03:18:55.023584 | orchestrator | Monday 09 February 2026 03:18:47 +0000 (0:00:00.381) 0:05:50.348 *******
2026-02-09 03:18:55.023590 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:18:55.023601 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:18:55.023610 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:18:55.023621 | orchestrator |
2026-02-09 03:18:55.023631 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-09 03:18:55.023641 | orchestrator | Monday 09 February 2026 03:18:48 +0000 (0:00:00.660) 0:05:51.009 *******
2026-02-09 03:18:55.023652 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:18:55.023662 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:18:55.023672 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:18:55.023681 | orchestrator |
2026-02-09 03:18:55.023691 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-09 03:18:55.023702 | orchestrator | Monday 09 February 2026 03:18:48 +0000 (0:00:00.353) 0:05:51.362 *******
2026-02-09 03:18:55.023711 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:18:55.023722 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:18:55.023732 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:18:55.023742 | orchestrator |
2026-02-09 03:18:55.023753 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-09 03:18:55.023762 | orchestrator | Monday 09 February 2026 03:18:53 +0000 (0:00:04.778) 0:05:56.141 *******
2026-02-09 03:18:55.023773 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:18:55.023783 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:18:55.023796 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:18:55.023807 | orchestrator |
2026-02-09 03:18:55.023819 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:18:55.023848 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-09 03:18:55.023860 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-09 03:18:55.023871 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-09 03:18:55.023881 | orchestrator |
2026-02-09 03:18:55.023901 | orchestrator |
2026-02-09 03:18:55.023912 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:18:55.023922 | orchestrator | Monday 09 February 2026 03:18:54 +0000 (0:00:00.794) 0:05:56.935 *******
2026-02-09 03:18:55.023932 | orchestrator | ===============================================================================
2026-02-09 03:18:55.023940 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.82s
2026-02-09 03:18:55.023946 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.89s
2026-02-09 03:18:55.023952 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.40s
2026-02-09 03:18:55.023959 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.56s
2026-02-09 03:18:55.023965 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.78s
2026-02-09 03:18:55.023971 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.74s
2026-02-09 03:18:55.023977 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.48s
2026-02-09 03:18:55.023983 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.38s
2026-02-09 03:18:55.023989 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.29s
2026-02-09 03:18:55.023995 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.21s
2026-02-09 03:18:55.024002 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.19s
2026-02-09 03:18:55.024008 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.11s
2026-02-09 03:18:55.024014 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.62s
2026-02-09 03:18:55.024020 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.59s
2026-02-09 03:18:55.024026 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.57s
2026-02-09 03:18:55.024032 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.54s
2026-02-09 03:18:55.024038 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.44s
2026-02-09 03:18:55.024045 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.41s
2026-02-09 03:18:55.024051 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.37s
2026-02-09 03:18:55.024057 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.36s
2026-02-09 03:18:57.570119 | orchestrator | 2026-02-09 03:18:57 | INFO  | Task 1c79789f-4154-4ae7-94ba-cb5e8ff7c3c2 (opensearch) was prepared for execution.
2026-02-09 03:18:57.570215 | orchestrator | 2026-02-09 03:18:57 | INFO  | It takes a moment until task 1c79789f-4154-4ae7-94ba-cb5e8ff7c3c2 (opensearch) has been started and output is visible here.
2026-02-09 03:19:08.384228 | orchestrator |
2026-02-09 03:19:08.384313 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 03:19:08.384324 | orchestrator |
2026-02-09 03:19:08.384330 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 03:19:08.384336 | orchestrator | Monday 09 February 2026 03:19:01 +0000 (0:00:00.271) 0:00:00.271 *******
2026-02-09 03:19:08.384342 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:19:08.384349 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:19:08.384355 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:19:08.384436 | orchestrator |
2026-02-09 03:19:08.384443 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 03:19:08.384449 | orchestrator | Monday 09 February 2026 03:19:02 +0000 (0:00:00.327) 0:00:00.598 *******
2026-02-09 03:19:08.384473 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-09 03:19:08.384483 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-09 03:19:08.384493 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-09 03:19:08.384502 | orchestrator |
2026-02-09 03:19:08.384510 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-09 03:19:08.384543 | orchestrator |
2026-02-09 03:19:08.384552 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-09 03:19:08.384560 | orchestrator | Monday 09 February 2026 03:19:02 +0000 (0:00:00.437) 0:00:01.035 *******
2026-02-09 03:19:08.384569 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:19:08.384577 | orchestrator |
2026-02-09 03:19:08.384585 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-02-09 03:19:08.384594 | orchestrator | Monday 09 February 2026 03:19:03 +0000 (0:00:00.496) 0:00:01.531 *******
2026-02-09 03:19:08.384603 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 03:19:08.384612 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 03:19:08.384621 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-09 03:19:08.384629 | orchestrator |
2026-02-09 03:19:08.384638 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-02-09 03:19:08.384646 | orchestrator | Monday 09 February 2026 03:19:03 +0000 (0:00:00.646) 0:00:02.178 *******
2026-02-09 03:19:08.384659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:08.384672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:08.384701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:08.384721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:08.384740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:08.384750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:08.384760 | orchestrator |
2026-02-09 03:19:08.384768 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-09 03:19:08.384777 | orchestrator | Monday 09 February 2026 03:19:05 +0000 (0:00:01.719) 0:00:03.898 *******
2026-02-09 03:19:08.384786 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:19:08.384796 | orchestrator |
2026-02-09 03:19:08.384805 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-02-09 03:19:08.384814 | orchestrator | Monday 09 February 2026 03:19:06 +0000 (0:00:00.520) 0:00:04.418 *******
2026-02-09 03:19:08.384837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:09.195254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:09.195348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:09.195426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:09.195442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:09.195510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:09.195524 | orchestrator |
2026-02-09 03:19:09.195536 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-09 03:19:09.195547 | orchestrator | Monday 09 February 2026 03:19:08 +0000 (0:00:02.314) 0:00:06.733 *******
2026-02-09 03:19:09.195558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:09.195569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:09.195580 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:19:09.195592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:09.195624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:10.275790 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:19:10.275874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:10.275887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:10.275895 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:19:10.275902 | orchestrator |
2026-02-09 03:19:10.275909 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-09 03:19:10.275917 | orchestrator | Monday 09 February 2026 03:19:09 +0000 (0:00:00.815) 0:00:07.549 *******
2026-02-09 03:19:10.275944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:10.275965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:10.275986 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:19:10.275993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:10.276000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:10.276006 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:19:10.276017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:10.276028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-09 03:19:10.276035 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:19:10.276042 | orchestrator |
2026-02-09 03:19:10.276048 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-02-09 03:19:10.276058 | orchestrator | Monday 09 February 2026 03:19:10 +0000 (0:00:01.074) 0:00:08.624 *******
2026-02-09 03:19:18.127861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-09 03:19:18.127962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-09 03:19:18.127977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-09 03:19:18.128025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-09 03:19:18.128057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-09 03:19:18.128070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-09 03:19:18.128087 | orchestrator | 2026-02-09 03:19:18.128096 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-09 03:19:18.128106 | orchestrator | Monday 09 February 2026 03:19:12 +0000 (0:00:02.228) 0:00:10.852 ******* 2026-02-09 03:19:18.128114 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:19:18.128123 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:19:18.128131 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:19:18.128139 | orchestrator | 2026-02-09 03:19:18.128148 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-09 03:19:18.128157 | orchestrator | Monday 09 February 2026 03:19:14 +0000 (0:00:02.241) 0:00:13.094 ******* 2026-02-09 03:19:18.128166 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:19:18.128175 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:19:18.128184 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:19:18.128192 | orchestrator | 2026-02-09 03:19:18.128201 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-09 
03:19:18.128209 | orchestrator | Monday 09 February 2026 03:19:16 +0000 (0:00:01.797) 0:00:14.891 ******* 2026-02-09 03:19:18.128217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-09 03:19:18.128230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-09 03:19:18.128247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-09 03:22:04.264718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-09 03:22:04.264853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-09 03:22:04.264881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-09 03:22:04.264891 | orchestrator | 2026-02-09 03:22:04.264901 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-09 03:22:04.264909 | orchestrator | Monday 09 February 2026 03:19:18 +0000 (0:00:01.586) 0:00:16.478 ******* 2026-02-09 03:22:04.264917 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:22:04.264925 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:22:04.264933 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:22:04.264940 | orchestrator | 2026-02-09 03:22:04.264949 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-09 03:22:04.264956 | orchestrator | Monday 09 February 2026 03:19:18 +0000 (0:00:00.301) 0:00:16.779 ******* 2026-02-09 03:22:04.264964 | orchestrator | 2026-02-09 03:22:04.264971 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-09 03:22:04.264978 | orchestrator | Monday 09 February 2026 03:19:18 +0000 (0:00:00.061) 0:00:16.841 ******* 2026-02-09 03:22:04.264985 | orchestrator | 2026-02-09 03:22:04.264993 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-09 03:22:04.265007 | orchestrator | Monday 09 February 2026 03:19:18 +0000 (0:00:00.066) 0:00:16.907 ******* 2026-02-09 03:22:04.265014 | orchestrator | 2026-02-09 03:22:04.265022 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-09 03:22:04.265044 | orchestrator | Monday 09 February 2026 03:19:18 +0000 (0:00:00.063) 0:00:16.970 ******* 2026-02-09 03:22:04.265052 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:22:04.265059 | orchestrator | 2026-02-09 03:22:04.265066 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-09 03:22:04.265074 | 
orchestrator | Monday 09 February 2026 03:19:18 +0000 (0:00:00.249) 0:00:17.220 ******* 2026-02-09 03:22:04.265081 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:22:04.265088 | orchestrator | 2026-02-09 03:22:04.265095 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-09 03:22:04.265106 | orchestrator | Monday 09 February 2026 03:19:19 +0000 (0:00:00.668) 0:00:17.889 ******* 2026-02-09 03:22:04.265119 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:22:04.265130 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:22:04.265142 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:22:04.265153 | orchestrator | 2026-02-09 03:22:04.265164 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-09 03:22:04.265176 | orchestrator | Monday 09 February 2026 03:20:31 +0000 (0:01:12.195) 0:01:30.084 ******* 2026-02-09 03:22:04.265187 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:22:04.265199 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:22:04.265209 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:22:04.265220 | orchestrator | 2026-02-09 03:22:04.265231 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-09 03:22:04.265243 | orchestrator | Monday 09 February 2026 03:21:54 +0000 (0:01:22.287) 0:02:52.372 ******* 2026-02-09 03:22:04.265331 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:22:04.265349 | orchestrator | 2026-02-09 03:22:04.265361 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-09 03:22:04.265374 | orchestrator | Monday 09 February 2026 03:21:54 +0000 (0:00:00.489) 0:02:52.861 ******* 2026-02-09 03:22:04.265386 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:22:04.265399 | orchestrator | 2026-02-09 
03:22:04.265412 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-09 03:22:04.265423 | orchestrator | Monday 09 February 2026 03:21:57 +0000 (0:00:02.732) 0:02:55.594 ******* 2026-02-09 03:22:04.265436 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:22:04.265447 | orchestrator | 2026-02-09 03:22:04.265460 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-09 03:22:04.265473 | orchestrator | Monday 09 February 2026 03:21:59 +0000 (0:00:02.110) 0:02:57.704 ******* 2026-02-09 03:22:04.265486 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:22:04.265500 | orchestrator | 2026-02-09 03:22:04.265512 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-09 03:22:04.265524 | orchestrator | Monday 09 February 2026 03:22:01 +0000 (0:00:02.509) 0:03:00.214 ******* 2026-02-09 03:22:04.265533 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:22:04.265541 | orchestrator | 2026-02-09 03:22:04.265550 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:22:04.265560 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-09 03:22:04.265570 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-09 03:22:04.265587 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-09 03:22:04.265596 | orchestrator | 2026-02-09 03:22:04.265606 | orchestrator | 2026-02-09 03:22:04.265623 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:22:04.265632 | orchestrator | Monday 09 February 2026 03:22:04 +0000 (0:00:02.389) 0:03:02.604 ******* 2026-02-09 03:22:04.265639 | orchestrator | 
=============================================================================== 2026-02-09 03:22:04.265646 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.29s 2026-02-09 03:22:04.265657 | orchestrator | opensearch : Restart opensearch container ------------------------------ 72.20s 2026-02-09 03:22:04.265669 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.73s 2026-02-09 03:22:04.265679 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.51s 2026-02-09 03:22:04.265690 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.39s 2026-02-09 03:22:04.265702 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.31s 2026-02-09 03:22:04.265713 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.24s 2026-02-09 03:22:04.265725 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.23s 2026-02-09 03:22:04.265737 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.11s 2026-02-09 03:22:04.265749 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.80s 2026-02-09 03:22:04.265760 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.72s 2026-02-09 03:22:04.265771 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.59s 2026-02-09 03:22:04.265782 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.07s 2026-02-09 03:22:04.265794 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.82s 2026-02-09 03:22:04.265805 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.67s 2026-02-09 03:22:04.265816 | orchestrator | 
opensearch : Setting sysctl values -------------------------------------- 0.65s 2026-02-09 03:22:04.265842 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-02-09 03:22:04.649901 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-02-09 03:22:04.650162 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-02-09 03:22:04.650196 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2026-02-09 03:22:07.151991 | orchestrator | 2026-02-09 03:22:07 | INFO  | Task 10a64b32-814d-483c-9fa9-103ba59728bf (memcached) was prepared for execution. 2026-02-09 03:22:07.152075 | orchestrator | 2026-02-09 03:22:07 | INFO  | It takes a moment until task 10a64b32-814d-483c-9fa9-103ba59728bf (memcached) has been started and output is visible here. 2026-02-09 03:22:23.846992 | orchestrator | 2026-02-09 03:22:23.847129 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 03:22:23.847165 | orchestrator | 2026-02-09 03:22:23.847177 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 03:22:23.847189 | orchestrator | Monday 09 February 2026 03:22:11 +0000 (0:00:00.283) 0:00:00.283 ******* 2026-02-09 03:22:23.847250 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:22:23.847267 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:22:23.847284 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:22:23.847299 | orchestrator | 2026-02-09 03:22:23.847314 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 03:22:23.847330 | orchestrator | Monday 09 February 2026 03:22:11 +0000 (0:00:00.297) 0:00:00.580 ******* 2026-02-09 03:22:23.847348 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-09 03:22:23.847363 | 
orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-09 03:22:23.847378 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-09 03:22:23.847394 | orchestrator | 2026-02-09 03:22:23.847411 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-09 03:22:23.847465 | orchestrator | 2026-02-09 03:22:23.847486 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-09 03:22:23.847502 | orchestrator | Monday 09 February 2026 03:22:12 +0000 (0:00:00.497) 0:00:01.078 ******* 2026-02-09 03:22:23.847519 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:22:23.847538 | orchestrator | 2026-02-09 03:22:23.847555 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-09 03:22:23.847573 | orchestrator | Monday 09 February 2026 03:22:12 +0000 (0:00:00.530) 0:00:01.609 ******* 2026-02-09 03:22:23.847590 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-09 03:22:23.847606 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-09 03:22:23.847624 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-09 03:22:23.847640 | orchestrator | 2026-02-09 03:22:23.847659 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-09 03:22:23.847676 | orchestrator | Monday 09 February 2026 03:22:13 +0000 (0:00:00.664) 0:00:02.273 ******* 2026-02-09 03:22:23.847695 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-09 03:22:23.847714 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-09 03:22:23.847734 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-09 03:22:23.847750 | orchestrator | 2026-02-09 03:22:23.847767 | orchestrator | TASK [memcached : Check 
memcached container] *********************************** 2026-02-09 03:22:23.847784 | orchestrator | Monday 09 February 2026 03:22:15 +0000 (0:00:01.854) 0:00:04.128 ******* 2026-02-09 03:22:23.847821 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:22:23.847839 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:22:23.847855 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:22:23.847872 | orchestrator | 2026-02-09 03:22:23.847889 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-09 03:22:23.847905 | orchestrator | Monday 09 February 2026 03:22:16 +0000 (0:00:01.601) 0:00:05.730 ******* 2026-02-09 03:22:23.847923 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:22:23.847939 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:22:23.847955 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:22:23.847972 | orchestrator | 2026-02-09 03:22:23.847990 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:22:23.848007 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:22:23.848026 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:22:23.848043 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:22:23.848061 | orchestrator | 2026-02-09 03:22:23.848077 | orchestrator | 2026-02-09 03:22:23.848093 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:22:23.848111 | orchestrator | Monday 09 February 2026 03:22:23 +0000 (0:00:06.381) 0:00:12.111 ******* 2026-02-09 03:22:23.848127 | orchestrator | =============================================================================== 2026-02-09 03:22:23.848144 | orchestrator | memcached : Restart memcached container 
--------------------------------- 6.38s 2026-02-09 03:22:23.848160 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.85s 2026-02-09 03:22:23.848177 | orchestrator | memcached : Check memcached container ----------------------------------- 1.60s 2026-02-09 03:22:23.848220 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.66s 2026-02-09 03:22:23.848239 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.53s 2026-02-09 03:22:23.848256 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2026-02-09 03:22:23.848272 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-02-09 03:22:26.453017 | orchestrator | 2026-02-09 03:22:26 | INFO  | Task 7953f085-d3fd-47fb-bcf1-75a0c27d9f57 (redis) was prepared for execution. 2026-02-09 03:22:26.453287 | orchestrator | 2026-02-09 03:22:26 | INFO  | It takes a moment until task 7953f085-d3fd-47fb-bcf1-75a0c27d9f57 (redis) has been started and output is visible here. 
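The service definitions echoed throughout this log (opensearch, memcached, and the redis play below) all carry the same kolla-style `healthcheck` dict: string-valued `interval`, `retries`, `start_period`, and `timeout` in seconds, plus a `test` list beginning with `CMD-SHELL`. As a minimal sketch of how such a dict maps onto Docker's health-check options — the helper function and flag mapping here are illustrative assumptions, not part of kolla-ansible itself:

```python
# Hedged sketch: translate a kolla-style 'healthcheck' dict (as seen in the
# service definitions in this log) into `docker run` --health-* flags.
# The helper name and exact mapping are assumptions for illustration only.

def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    """Map interval/retries/start_period/timeout plus a CMD-SHELL test
    onto the corresponding docker CLI health-check flags."""
    flags = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    test = hc["test"]
    if test and test[0] == "CMD-SHELL":
        # Remaining elements form the shell command run inside the container.
        flags.append("--health-cmd=" + " ".join(test[1:]))
    return flags

# Example dict shaped like the redis definition deployed in the next play.
redis_like = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
    "timeout": "30",
}
print(healthcheck_to_docker_flags(redis_like))
```

Note that `healthcheck_curl` and `healthcheck_listen` are scripts shipped inside the kolla images, which is why the `test` commands reference them bare.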
2026-02-09 03:22:35.712089 | orchestrator | 2026-02-09 03:22:35.712224 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 03:22:35.712242 | orchestrator | 2026-02-09 03:22:35.712253 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 03:22:35.712264 | orchestrator | Monday 09 February 2026 03:22:30 +0000 (0:00:00.258) 0:00:00.258 ******* 2026-02-09 03:22:35.712274 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:22:35.712285 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:22:35.712295 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:22:35.712304 | orchestrator | 2026-02-09 03:22:35.712314 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 03:22:35.712324 | orchestrator | Monday 09 February 2026 03:22:31 +0000 (0:00:00.307) 0:00:00.565 ******* 2026-02-09 03:22:35.712334 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-09 03:22:35.712344 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-09 03:22:35.712354 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-09 03:22:35.712364 | orchestrator | 2026-02-09 03:22:35.712373 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-09 03:22:35.712383 | orchestrator | 2026-02-09 03:22:35.712393 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-09 03:22:35.712403 | orchestrator | Monday 09 February 2026 03:22:31 +0000 (0:00:00.457) 0:00:01.023 ******* 2026-02-09 03:22:35.712413 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:22:35.712423 | orchestrator | 2026-02-09 03:22:35.712433 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-09 
03:22:35.712443 | orchestrator | Monday 09 February 2026 03:22:32 +0000 (0:00:00.561) 0:00:01.585 ******* 2026-02-09 03:22:35.712457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712519 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712571 | orchestrator | 2026-02-09 03:22:35.712581 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-09 03:22:35.712591 | orchestrator | Monday 09 February 2026 03:22:33 +0000 (0:00:01.101) 0:00:02.686 ******* 2026-02-09 03:22:35.712601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:35.712764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.813705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.813801 | orchestrator | 2026-02-09 03:22:39.813835 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-09 03:22:39.813847 | orchestrator | Monday 09 February 2026 03:22:35 +0000 (0:00:02.499) 0:00:05.186 ******* 2026-02-09 03:22:39.813859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.813886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 
03:22:39.813895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.813925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.813935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.813963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.813973 | orchestrator | 2026-02-09 03:22:39.813982 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-09 03:22:39.813991 | orchestrator | Monday 09 February 2026 03:22:38 +0000 (0:00:02.447) 0:00:07.633 ******* 2026-02-09 03:22:39.813999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.814008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.814075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.814095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.814104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:39.814123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 03:22:54.879592 | orchestrator | 2026-02-09 03:22:54.879698 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-09 03:22:54.879716 | orchestrator | Monday 09 February 2026 03:22:39 +0000 (0:00:01.419) 0:00:09.052 ******* 2026-02-09 03:22:54.879727 | orchestrator | 2026-02-09 03:22:54.879739 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-09 03:22:54.879750 | orchestrator | Monday 09 February 2026 03:22:39 +0000 (0:00:00.077) 0:00:09.130 ******* 2026-02-09 03:22:54.879761 | orchestrator | 2026-02-09 03:22:54.879772 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-09 03:22:54.879783 | orchestrator | Monday 09 February 2026 
03:22:39 +0000 (0:00:00.063) 0:00:09.193 ******* 2026-02-09 03:22:54.879794 | orchestrator | 2026-02-09 03:22:54.879804 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-09 03:22:54.879815 | orchestrator | Monday 09 February 2026 03:22:39 +0000 (0:00:00.091) 0:00:09.285 ******* 2026-02-09 03:22:54.879826 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:22:54.879838 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:22:54.879849 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:22:54.879860 | orchestrator | 2026-02-09 03:22:54.879871 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-09 03:22:54.879882 | orchestrator | Monday 09 February 2026 03:22:46 +0000 (0:00:06.705) 0:00:15.991 ******* 2026-02-09 03:22:54.879923 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:22:54.879934 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:22:54.879945 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:22:54.879956 | orchestrator | 2026-02-09 03:22:54.879967 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:22:54.879979 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:22:54.879991 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:22:54.880017 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:22:54.880028 | orchestrator | 2026-02-09 03:22:54.880039 | orchestrator | 2026-02-09 03:22:54.880050 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:22:54.880061 | orchestrator | Monday 09 February 2026 03:22:54 +0000 (0:00:07.977) 0:00:23.969 ******* 2026-02-09 03:22:54.880072 | orchestrator | 
===============================================================================
2026-02-09 03:22:54.880083 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.98s
2026-02-09 03:22:54.880094 | orchestrator | redis : Restart redis container ----------------------------------------- 6.71s
2026-02-09 03:22:54.880133 | orchestrator | redis : Copying over default config.json files -------------------------- 2.50s
2026-02-09 03:22:54.880146 | orchestrator | redis : Copying over redis config files --------------------------------- 2.45s
2026-02-09 03:22:54.880159 | orchestrator | redis : Check redis containers ------------------------------------------ 1.42s
2026-02-09 03:22:54.880172 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.10s
2026-02-09 03:22:54.880184 | orchestrator | redis : include_tasks --------------------------------------------------- 0.56s
2026-02-09 03:22:54.880195 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s
2026-02-09 03:22:54.880205 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-02-09 03:22:54.880216 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s
2026-02-09 03:22:57.476762 | orchestrator | 2026-02-09 03:22:57 | INFO  | Task 9e17ec82-4910-4d31-af94-fc2a955dd450 (mariadb) was prepared for execution.
2026-02-09 03:22:57.476847 | orchestrator | 2026-02-09 03:22:57 | INFO  | It takes a moment until task 9e17ec82-4910-4d31-af94-fc2a955dd450 (mariadb) has been started and output is visible here.
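In the mariadb play that follows, the service definition's `haproxy.mariadb.custom_member_list` marks only the first Galera node as an active backend and appends `backup` to the others, giving an active/standby frontend on port 3306. A minimal sketch of deriving such member lines from an ordered host list (a hypothetical helper for illustration, not the kolla-ansible template itself):

```python
def mariadb_member_lines(hosts, port=3306):
    """Render haproxy backend lines in the shape seen in this log:
    the first host is the active backend, every later host is 'backup'.

    hosts: ordered list of (name, address) tuples."""
    lines = []
    for i, (name, addr) in enumerate(hosts):
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if i > 0:
            line += " backup"  # standby: only used if earlier backends fail
        lines.append(line)
    return lines
```

Routing all writes to a single Galera node this way avoids the multi-writer certification conflicts that can occur when clients spread writes across the cluster.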
2026-02-09 03:23:11.194761 | orchestrator | 2026-02-09 03:23:11.194837 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 03:23:11.194845 | orchestrator | 2026-02-09 03:23:11.194850 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 03:23:11.194856 | orchestrator | Monday 09 February 2026 03:23:01 +0000 (0:00:00.170) 0:00:00.170 ******* 2026-02-09 03:23:11.194860 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:23:11.194866 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:23:11.194873 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:23:11.194880 | orchestrator | 2026-02-09 03:23:11.194888 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 03:23:11.194896 | orchestrator | Monday 09 February 2026 03:23:02 +0000 (0:00:00.342) 0:00:00.512 ******* 2026-02-09 03:23:11.194904 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-09 03:23:11.194912 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-09 03:23:11.194920 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-09 03:23:11.194927 | orchestrator | 2026-02-09 03:23:11.194934 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-09 03:23:11.194942 | orchestrator | 2026-02-09 03:23:11.194949 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-09 03:23:11.194976 | orchestrator | Monday 09 February 2026 03:23:02 +0000 (0:00:00.557) 0:00:01.070 ******* 2026-02-09 03:23:11.194984 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 03:23:11.194991 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-09 03:23:11.194999 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-09 03:23:11.195004 | orchestrator | 
2026-02-09 03:23:11.195009 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-09 03:23:11.195013 | orchestrator | Monday 09 February 2026 03:23:03 +0000 (0:00:00.366) 0:00:01.436 ******* 2026-02-09 03:23:11.195018 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:23:11.195023 | orchestrator | 2026-02-09 03:23:11.195028 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-09 03:23:11.195032 | orchestrator | Monday 09 February 2026 03:23:03 +0000 (0:00:00.524) 0:00:01.960 ******* 2026-02-09 03:23:11.195091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 03:23:11.195114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 03:23:11.195128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 03:23:11.195133 | orchestrator | 2026-02-09 03:23:11.195138 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-09 03:23:11.195142 | orchestrator | Monday 09 February 2026 03:23:06 +0000 (0:00:02.539) 0:00:04.500 ******* 2026-02-09 03:23:11.195147 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:23:11.195152 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:23:11.195157 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:23:11.195161 | orchestrator | 2026-02-09 03:23:11.195165 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-09 03:23:11.195170 | orchestrator | Monday 09 February 2026 03:23:06 +0000 (0:00:00.637) 0:00:05.137 ******* 2026-02-09 03:23:11.195174 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:23:11.195178 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:23:11.195183 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:23:11.195187 | orchestrator | 2026-02-09 03:23:11.195191 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-09 03:23:11.195196 | orchestrator | Monday 09 February 2026 03:23:08 +0000 (0:00:01.390) 0:00:06.528 ******* 2026-02-09 03:23:11.195205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 03:23:19.084937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 03:23:19.085009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 03:23:19.085083 | orchestrator | 2026-02-09 03:23:19.085090 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-09 03:23:19.085096 | orchestrator | Monday 09 February 2026 03:23:11 +0000 (0:00:03.004) 0:00:09.533 ******* 2026-02-09 03:23:19.085100 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:23:19.085105 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:23:19.085109 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:23:19.085113 | orchestrator | 2026-02-09 03:23:19.085117 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-09 03:23:19.085131 | orchestrator | Monday 09 February 2026 03:23:12 +0000 (0:00:01.113) 0:00:10.647 ******* 2026-02-09 03:23:19.085135 | 
orchestrator | changed: [testbed-node-0] 2026-02-09 03:23:19.085139 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:23:19.085142 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:23:19.085147 | orchestrator | 2026-02-09 03:23:19.085150 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-09 03:23:19.085154 | orchestrator | Monday 09 February 2026 03:23:16 +0000 (0:00:03.738) 0:00:14.385 ******* 2026-02-09 03:23:19.085158 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:23:19.085163 | orchestrator | 2026-02-09 03:23:19.085167 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-09 03:23:19.085170 | orchestrator | Monday 09 February 2026 03:23:16 +0000 (0:00:00.591) 0:00:14.976 ******* 2026-02-09 03:23:19.085179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:23:19.085188 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:23:19.085195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:23:24.133183 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:23:24.133256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:23:24.133278 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:23:24.133283 | orchestrator | 2026-02-09 03:23:24.133289 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-09 03:23:24.133294 | orchestrator | Monday 09 February 2026 03:23:19 +0000 (0:00:02.446) 0:00:17.423 ******* 2026-02-09 03:23:24.133300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:23:24.133305 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:23:24.133324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:23:24.133334 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:23:24.133339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:23:24.133344 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:23:24.133350 | orchestrator | 2026-02-09 03:23:24.133358 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-09 03:23:24.133365 | orchestrator | Monday 09 February 2026 03:23:21 +0000 (0:00:02.760) 0:00:20.184 ******* 2026-02-09 03:23:24.133382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:23:26.886487 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:23:26.886638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:23:26.886679 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:23:26.886717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 03:23:26.886764 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:23:26.886782 | orchestrator | 2026-02-09 03:23:26.886798 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-09 03:23:26.886820 | orchestrator | Monday 09 February 2026 03:23:24 +0000 (0:00:02.293) 0:00:22.477 ******* 2026-02-09 03:23:26.886868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 03:23:26.886888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 03:23:26.886924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 03:25:41.642928 | orchestrator | 2026-02-09 03:25:41.643013 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-09 03:25:41.643021 | orchestrator | Monday 09 February 2026 03:23:26 +0000 (0:00:02.752) 0:00:25.229 ******* 2026-02-09 03:25:41.643027 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:25:41.643033 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:25:41.643038 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:25:41.643043 | orchestrator | 2026-02-09 03:25:41.643049 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-09 03:25:41.643054 | orchestrator | Monday 09 February 2026 03:23:27 +0000 (0:00:00.796) 0:00:26.026 ******* 2026-02-09 03:25:41.643059 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:41.643065 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:25:41.643070 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:25:41.643075 | orchestrator | 2026-02-09 03:25:41.643080 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-02-09 03:25:41.643084 | orchestrator | Monday 09 February 2026 03:23:28 +0000 (0:00:00.604) 0:00:26.631 ******* 2026-02-09 03:25:41.643089 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:41.643094 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:25:41.643099 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:25:41.643104 | orchestrator | 2026-02-09 03:25:41.643109 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-09 03:25:41.643114 | orchestrator | Monday 09 February 2026 03:23:28 +0000 (0:00:00.331) 0:00:26.963 ******* 2026-02-09 03:25:41.643120 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-09 03:25:41.643127 | orchestrator | ...ignoring 2026-02-09 03:25:41.643132 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-09 03:25:41.643137 | orchestrator | ...ignoring 2026-02-09 03:25:41.643142 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-09 03:25:41.643146 | orchestrator | ...ignoring 2026-02-09 03:25:41.643168 | orchestrator | 2026-02-09 03:25:41.643173 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-09 03:25:41.643178 | orchestrator | Monday 09 February 2026 03:23:39 +0000 (0:00:10.876) 0:00:37.840 ******* 2026-02-09 03:25:41.643183 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:41.643188 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:25:41.643192 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:25:41.643197 | orchestrator | 2026-02-09 03:25:41.643202 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-09 03:25:41.643207 | orchestrator | Monday 09 February 2026 03:23:40 +0000 (0:00:00.577) 0:00:38.417 ******* 2026-02-09 03:25:41.643212 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:25:41.643217 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:41.643222 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:41.643226 | orchestrator | 2026-02-09 03:25:41.643231 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-09 03:25:41.643236 | orchestrator | Monday 09 February 2026 03:23:40 +0000 (0:00:00.670) 0:00:39.087 ******* 2026-02-09 03:25:41.643241 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:25:41.643246 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:41.643251 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:41.643256 | orchestrator | 2026-02-09 03:25:41.643272 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-09 03:25:41.643278 | orchestrator | Monday 09 February 2026 03:23:41 +0000 (0:00:00.416) 0:00:39.503 ******* 2026-02-09 03:25:41.643283 | orchestrator | skipping: 
[testbed-node-0] 2026-02-09 03:25:41.643288 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:41.643293 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:41.643298 | orchestrator | 2026-02-09 03:25:41.643303 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-09 03:25:41.643308 | orchestrator | Monday 09 February 2026 03:23:41 +0000 (0:00:00.440) 0:00:39.944 ******* 2026-02-09 03:25:41.643314 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:41.643319 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:25:41.643324 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:25:41.643329 | orchestrator | 2026-02-09 03:25:41.643334 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-09 03:25:41.643340 | orchestrator | Monday 09 February 2026 03:23:42 +0000 (0:00:00.433) 0:00:40.377 ******* 2026-02-09 03:25:41.643346 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:25:41.643351 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:41.643356 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:41.643361 | orchestrator | 2026-02-09 03:25:41.643366 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-09 03:25:41.643371 | orchestrator | Monday 09 February 2026 03:23:42 +0000 (0:00:00.918) 0:00:41.296 ******* 2026-02-09 03:25:41.643377 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:41.643382 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:41.643387 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-09 03:25:41.643393 | orchestrator | 2026-02-09 03:25:41.643398 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-09 03:25:41.643403 | orchestrator | Monday 09 February 2026 03:23:43 +0000 (0:00:00.411) 0:00:41.707 ******* 2026-02-09 
03:25:41.643408 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:25:41.643413 | orchestrator | 2026-02-09 03:25:41.643419 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-09 03:25:41.643424 | orchestrator | Monday 09 February 2026 03:23:53 +0000 (0:00:10.099) 0:00:51.807 ******* 2026-02-09 03:25:41.643429 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:41.643434 | orchestrator | 2026-02-09 03:25:41.643440 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-09 03:25:41.643445 | orchestrator | Monday 09 February 2026 03:23:53 +0000 (0:00:00.150) 0:00:51.957 ******* 2026-02-09 03:25:41.643451 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:25:41.643475 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:41.643481 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:41.643488 | orchestrator | 2026-02-09 03:25:41.643494 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-09 03:25:41.643500 | orchestrator | Monday 09 February 2026 03:23:54 +0000 (0:00:01.028) 0:00:52.986 ******* 2026-02-09 03:25:41.643508 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:25:41.643516 | orchestrator | 2026-02-09 03:25:41.643523 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-09 03:25:41.643531 | orchestrator | Monday 09 February 2026 03:24:02 +0000 (0:00:07.809) 0:01:00.796 ******* 2026-02-09 03:25:41.643539 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:41.643547 | orchestrator | 2026-02-09 03:25:41.643557 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-02-09 03:25:41.643568 | orchestrator | Monday 09 February 2026 03:24:04 +0000 (0:00:01.649) 0:01:02.446 ******* 2026-02-09 03:25:41.643577 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:41.643584 | 
orchestrator | 2026-02-09 03:25:41.643593 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-09 03:25:41.643601 | orchestrator | Monday 09 February 2026 03:24:06 +0000 (0:00:02.475) 0:01:04.921 ******* 2026-02-09 03:25:41.643608 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:25:41.643616 | orchestrator | 2026-02-09 03:25:41.643624 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-09 03:25:41.643632 | orchestrator | Monday 09 February 2026 03:24:06 +0000 (0:00:00.129) 0:01:05.051 ******* 2026-02-09 03:25:41.643638 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:25:41.643644 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:41.643649 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:41.643655 | orchestrator | 2026-02-09 03:25:41.643662 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-09 03:25:41.643671 | orchestrator | Monday 09 February 2026 03:24:07 +0000 (0:00:00.358) 0:01:05.409 ******* 2026-02-09 03:25:41.643679 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:25:41.643703 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-09 03:25:41.643712 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:25:41.643720 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:25:41.643727 | orchestrator | 2026-02-09 03:25:41.643734 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-09 03:25:41.643741 | orchestrator | skipping: no hosts matched 2026-02-09 03:25:41.643748 | orchestrator | 2026-02-09 03:25:41.643755 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-09 03:25:41.643762 | orchestrator | 2026-02-09 03:25:41.643769 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-09 03:25:41.643776 | orchestrator | Monday 09 February 2026 03:24:07 +0000 (0:00:00.576) 0:01:05.985 ******* 2026-02-09 03:25:41.643783 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:25:41.643790 | orchestrator | 2026-02-09 03:25:41.643798 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-09 03:25:41.643805 | orchestrator | Monday 09 February 2026 03:24:30 +0000 (0:00:23.028) 0:01:29.014 ******* 2026-02-09 03:25:41.643813 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:25:41.643821 | orchestrator | 2026-02-09 03:25:41.643830 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-09 03:25:41.643839 | orchestrator | Monday 09 February 2026 03:24:42 +0000 (0:00:11.569) 0:01:40.584 ******* 2026-02-09 03:25:41.643848 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:25:41.643855 | orchestrator | 2026-02-09 03:25:41.643866 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-09 03:25:41.643874 | orchestrator | 2026-02-09 03:25:41.643889 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-09 03:25:41.643895 | orchestrator | Monday 09 February 2026 03:24:44 +0000 (0:00:02.466) 0:01:43.050 ******* 2026-02-09 03:25:41.643908 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:25:41.643913 | orchestrator | 2026-02-09 03:25:41.643918 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-09 03:25:41.643923 | orchestrator | Monday 09 February 2026 03:25:02 +0000 (0:00:18.021) 0:02:01.072 ******* 2026-02-09 03:25:41.643928 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:25:41.643933 | orchestrator | 2026-02-09 03:25:41.643938 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-09 03:25:41.643943 
| orchestrator | Monday 09 February 2026 03:25:19 +0000 (0:00:16.572) 0:02:17.645 ******* 2026-02-09 03:25:41.643948 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:25:41.643953 | orchestrator | 2026-02-09 03:25:41.643958 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-09 03:25:41.643963 | orchestrator | 2026-02-09 03:25:41.643968 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-09 03:25:41.643974 | orchestrator | Monday 09 February 2026 03:25:21 +0000 (0:00:02.521) 0:02:20.166 ******* 2026-02-09 03:25:41.643979 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:25:41.643984 | orchestrator | 2026-02-09 03:25:41.643988 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-09 03:25:41.643993 | orchestrator | Monday 09 February 2026 03:25:33 +0000 (0:00:11.983) 0:02:32.150 ******* 2026-02-09 03:25:41.643998 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:41.644003 | orchestrator | 2026-02-09 03:25:41.644008 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-09 03:25:41.644013 | orchestrator | Monday 09 February 2026 03:25:38 +0000 (0:00:04.571) 0:02:36.721 ******* 2026-02-09 03:25:41.644018 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:41.644023 | orchestrator | 2026-02-09 03:25:41.644028 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-09 03:25:41.644033 | orchestrator | 2026-02-09 03:25:41.644038 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-09 03:25:41.644043 | orchestrator | Monday 09 February 2026 03:25:41 +0000 (0:00:02.718) 0:02:39.440 ******* 2026-02-09 03:25:41.644048 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:25:41.644054 | orchestrator | 
2026-02-09 03:25:41.644059 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-09 03:25:41.644071 | orchestrator | Monday 09 February 2026 03:25:41 +0000 (0:00:00.541) 0:02:39.982 ******* 2026-02-09 03:25:54.125726 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:54.125831 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:54.125847 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:25:54.125856 | orchestrator | 2026-02-09 03:25:54.125867 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-09 03:25:54.125877 | orchestrator | Monday 09 February 2026 03:25:43 +0000 (0:00:02.195) 0:02:42.177 ******* 2026-02-09 03:25:54.125886 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:54.125894 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:54.125903 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:25:54.125912 | orchestrator | 2026-02-09 03:25:54.125921 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-09 03:25:54.125930 | orchestrator | Monday 09 February 2026 03:25:45 +0000 (0:00:02.013) 0:02:44.191 ******* 2026-02-09 03:25:54.125938 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:54.125947 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:54.125954 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:25:54.125963 | orchestrator | 2026-02-09 03:25:54.125972 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-09 03:25:54.125980 | orchestrator | Monday 09 February 2026 03:25:48 +0000 (0:00:02.306) 0:02:46.497 ******* 2026-02-09 03:25:54.125990 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:54.125996 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:54.126002 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:25:54.126007 | orchestrator | 
2026-02-09 03:25:54.126077 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-09 03:25:54.126084 | orchestrator | Monday 09 February 2026 03:25:50 +0000 (0:00:02.074) 0:02:48.572 ******* 2026-02-09 03:25:54.126089 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:25:54.126096 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:25:54.126101 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:25:54.126106 | orchestrator | 2026-02-09 03:25:54.126111 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-09 03:25:54.126120 | orchestrator | Monday 09 February 2026 03:25:53 +0000 (0:00:03.092) 0:02:51.664 ******* 2026-02-09 03:25:54.126127 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:25:54.126136 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:25:54.126145 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:25:54.126153 | orchestrator | 2026-02-09 03:25:54.126163 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:25:54.126170 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-09 03:25:54.126177 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-09 03:25:54.126183 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-09 03:25:54.126188 | orchestrator | 2026-02-09 03:25:54.126193 | orchestrator | 2026-02-09 03:25:54.126198 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:25:54.126203 | orchestrator | Monday 09 February 2026 03:25:53 +0000 (0:00:00.440) 0:02:52.104 ******* 2026-02-09 03:25:54.126208 | orchestrator | =============================================================================== 2026-02-09 03:25:54.126224 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.05s 2026-02-09 03:25:54.126234 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 28.14s 2026-02-09 03:25:54.126243 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.98s 2026-02-09 03:25:54.126252 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.88s 2026-02-09 03:25:54.126261 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.10s 2026-02-09 03:25:54.126270 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.81s 2026-02-09 03:25:54.126279 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.99s 2026-02-09 03:25:54.126288 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.57s 2026-02-09 03:25:54.126297 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.74s 2026-02-09 03:25:54.126306 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.09s 2026-02-09 03:25:54.126316 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.00s 2026-02-09 03:25:54.126325 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.76s 2026-02-09 03:25:54.126335 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.75s 2026-02-09 03:25:54.126341 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.72s 2026-02-09 03:25:54.126346 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.54s 2026-02-09 03:25:54.126351 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.48s 2026-02-09 03:25:54.126356 | 
orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.45s 2026-02-09 03:25:54.126361 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.31s 2026-02-09 03:25:54.126367 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.29s 2026-02-09 03:25:54.126372 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.20s 2026-02-09 03:25:56.520332 | orchestrator | 2026-02-09 03:25:56 | INFO  | Task 44910628-beb2-4ca7-a09c-32be8c90f7a6 (rabbitmq) was prepared for execution. 2026-02-09 03:25:56.520431 | orchestrator | 2026-02-09 03:25:56 | INFO  | It takes a moment until task 44910628-beb2-4ca7-a09c-32be8c90f7a6 (rabbitmq) has been started and output is visible here. 2026-02-09 03:26:09.790108 | orchestrator | 2026-02-09 03:26:09.790247 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 03:26:09.790274 | orchestrator | 2026-02-09 03:26:09.790295 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 03:26:09.790315 | orchestrator | Monday 09 February 2026 03:26:00 +0000 (0:00:00.172) 0:00:00.172 ******* 2026-02-09 03:26:09.790329 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:26:09.790341 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:26:09.790351 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:26:09.790362 | orchestrator | 2026-02-09 03:26:09.790374 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 03:26:09.790385 | orchestrator | Monday 09 February 2026 03:26:01 +0000 (0:00:00.322) 0:00:00.495 ******* 2026-02-09 03:26:09.790396 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-09 03:26:09.790407 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-09 03:26:09.790418 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-09 03:26:09.790429 | orchestrator | 2026-02-09 03:26:09.790439 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-09 03:26:09.790451 | orchestrator | 2026-02-09 03:26:09.790464 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-09 03:26:09.790484 | orchestrator | Monday 09 February 2026 03:26:01 +0000 (0:00:00.570) 0:00:01.066 ******* 2026-02-09 03:26:09.790503 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:26:09.790522 | orchestrator | 2026-02-09 03:26:09.790542 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-09 03:26:09.790562 | orchestrator | Monday 09 February 2026 03:26:02 +0000 (0:00:00.531) 0:00:01.597 ******* 2026-02-09 03:26:09.790581 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:26:09.790600 | orchestrator | 2026-02-09 03:26:09.790619 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-09 03:26:09.790693 | orchestrator | Monday 09 February 2026 03:26:03 +0000 (0:00:00.963) 0:00:02.560 ******* 2026-02-09 03:26:09.790708 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:26:09.790722 | orchestrator | 2026-02-09 03:26:09.790735 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-09 03:26:09.790747 | orchestrator | Monday 09 February 2026 03:26:03 +0000 (0:00:00.387) 0:00:02.948 ******* 2026-02-09 03:26:09.790760 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:26:09.790772 | orchestrator | 2026-02-09 03:26:09.790785 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-09 03:26:09.790798 | orchestrator | Monday 09 February 2026 03:26:03 +0000 (0:00:00.381) 0:00:03.330 ******* 
2026-02-09 03:26:09.790810 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:26:09.790821 | orchestrator | 2026-02-09 03:26:09.790832 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-09 03:26:09.790843 | orchestrator | Monday 09 February 2026 03:26:04 +0000 (0:00:00.371) 0:00:03.702 ******* 2026-02-09 03:26:09.790853 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:26:09.790864 | orchestrator | 2026-02-09 03:26:09.790875 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-09 03:26:09.790886 | orchestrator | Monday 09 February 2026 03:26:04 +0000 (0:00:00.554) 0:00:04.256 ******* 2026-02-09 03:26:09.790915 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:26:09.790950 | orchestrator | 2026-02-09 03:26:09.790962 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-09 03:26:09.790972 | orchestrator | Monday 09 February 2026 03:26:05 +0000 (0:00:00.847) 0:00:05.104 ******* 2026-02-09 03:26:09.790983 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:26:09.790994 | orchestrator | 2026-02-09 03:26:09.791005 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-09 03:26:09.791015 | orchestrator | Monday 09 February 2026 03:26:06 +0000 (0:00:00.848) 0:00:05.953 ******* 2026-02-09 03:26:09.791026 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:26:09.791037 | orchestrator | 2026-02-09 03:26:09.791047 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-09 03:26:09.791058 | orchestrator | Monday 09 February 2026 03:26:06 +0000 (0:00:00.378) 0:00:06.331 ******* 2026-02-09 03:26:09.791069 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:26:09.791080 | orchestrator | 2026-02-09 
03:26:09.791091 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-09 03:26:09.791101 | orchestrator | Monday 09 February 2026 03:26:07 +0000 (0:00:00.392) 0:00:06.723 ******* 2026-02-09 03:26:09.791145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:26:09.791163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:26:09.791176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:26:09.791196 | orchestrator | 2026-02-09 03:26:09.791213 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-09 03:26:09.791225 | orchestrator | Monday 09 February 2026 03:26:08 +0000 (0:00:00.788) 0:00:07.512 ******* 2026-02-09 03:26:09.791237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:26:09.791258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:26:28.175896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:26:28.176014 | orchestrator | 2026-02-09 03:26:28.176034 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-09 03:26:28.176049 | orchestrator | Monday 09 February 2026 03:26:09 +0000 (0:00:01.633) 0:00:09.146 ******* 2026-02-09 03:26:28.176085 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-09 03:26:28.176098 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-09 03:26:28.176109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-09 03:26:28.176120 | orchestrator | 2026-02-09 03:26:28.176132 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-02-09 03:26:28.176143 | orchestrator | Monday 09 February 2026 03:26:11 +0000 (0:00:01.537) 0:00:10.683 ******* 2026-02-09 03:26:28.176167 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-09 03:26:28.176179 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-09 03:26:28.176190 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-09 03:26:28.176201 | orchestrator | 2026-02-09 03:26:28.176212 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-09 03:26:28.176223 | orchestrator | Monday 09 February 2026 03:26:12 +0000 (0:00:01.668) 0:00:12.352 ******* 2026-02-09 03:26:28.176233 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-09 03:26:28.176244 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-09 03:26:28.176255 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-09 03:26:28.176266 | orchestrator | 2026-02-09 03:26:28.176276 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-09 03:26:28.176287 | orchestrator | Monday 09 February 2026 03:26:14 +0000 (0:00:01.365) 0:00:13.718 ******* 2026-02-09 03:26:28.176298 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-09 03:26:28.176309 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-09 03:26:28.176320 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-09 03:26:28.176330 | orchestrator | 2026-02-09 03:26:28.176341 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-02-09 03:26:28.176352 | orchestrator | Monday 09 February 2026 03:26:16 +0000 (0:00:01.750) 0:00:15.469 ******* 2026-02-09 03:26:28.176363 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-09 03:26:28.176376 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-09 03:26:28.176389 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-09 03:26:28.176402 | orchestrator | 2026-02-09 03:26:28.176415 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-09 03:26:28.176429 | orchestrator | Monday 09 February 2026 03:26:17 +0000 (0:00:01.367) 0:00:16.836 ******* 2026-02-09 03:26:28.176442 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-09 03:26:28.176454 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-09 03:26:28.176467 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-09 03:26:28.176479 | orchestrator | 2026-02-09 03:26:28.176492 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-09 03:26:28.176505 | orchestrator | Monday 09 February 2026 03:26:18 +0000 (0:00:01.499) 0:00:18.336 ******* 2026-02-09 03:26:28.176518 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:26:28.176530 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:26:28.176558 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:26:28.176581 | orchestrator | 2026-02-09 03:26:28.176592 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-09 03:26:28.176635 | orchestrator | Monday 
09 February 2026 03:26:19 +0000 (0:00:00.509) 0:00:18.845 ******* 2026-02-09 03:26:28.176649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:26:28.176669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:26:28.176682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 03:26:28.176694 | orchestrator | 2026-02-09 03:26:28.176705 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-09 03:26:28.176716 | orchestrator | Monday 09 February 2026 03:26:20 +0000 (0:00:01.194) 0:00:20.040 ******* 2026-02-09 03:26:28.176727 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:26:28.176738 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:26:28.176749 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:26:28.176760 | orchestrator | 2026-02-09 03:26:28.176771 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-09 03:26:28.176789 | orchestrator | Monday 09 February 2026 03:26:21 +0000 (0:00:00.804) 0:00:20.844 ******* 2026-02-09 03:26:28.176800 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:26:28.176811 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:26:28.176822 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:26:28.176832 | orchestrator | 2026-02-09 03:26:28.176843 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-09 03:26:28.176861 | orchestrator | Monday 09 February 2026 03:26:28 +0000 (0:00:06.686) 0:00:27.531 ******* 2026-02-09 03:28:00.227873 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:28:00.227987 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:28:00.228003 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:28:00.228015 | orchestrator | 2026-02-09 03:28:00.228028 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-09 03:28:00.228039 | orchestrator | 2026-02-09 03:28:00.228051 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-09 03:28:00.228062 | orchestrator | Monday 09 February 2026 03:26:28 +0000 (0:00:00.547) 0:00:28.079 ******* 2026-02-09 03:28:00.228073 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:28:00.228085 | orchestrator | 2026-02-09 03:28:00.228096 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-09 03:28:00.228107 | orchestrator | Monday 09 February 2026 03:26:29 +0000 (0:00:00.582) 0:00:28.661 ******* 2026-02-09 03:28:00.228118 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:28:00.228129 | orchestrator | 2026-02-09 03:28:00.228139 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-09 03:28:00.228150 | orchestrator | Monday 09 
February 2026 03:26:29 +0000 (0:00:00.254) 0:00:28.916 ******* 2026-02-09 03:28:00.228161 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:28:00.228172 | orchestrator | 2026-02-09 03:28:00.228183 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-09 03:28:00.228194 | orchestrator | Monday 09 February 2026 03:26:31 +0000 (0:00:01.541) 0:00:30.457 ******* 2026-02-09 03:28:00.228205 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:28:00.228216 | orchestrator | 2026-02-09 03:28:00.228227 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-09 03:28:00.228238 | orchestrator | 2026-02-09 03:28:00.228249 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-09 03:28:00.228260 | orchestrator | Monday 09 February 2026 03:27:23 +0000 (0:00:52.074) 0:01:22.532 ******* 2026-02-09 03:28:00.228271 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:28:00.228282 | orchestrator | 2026-02-09 03:28:00.228293 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-09 03:28:00.228304 | orchestrator | Monday 09 February 2026 03:27:23 +0000 (0:00:00.618) 0:01:23.150 ******* 2026-02-09 03:28:00.228315 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:28:00.228325 | orchestrator | 2026-02-09 03:28:00.228336 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-09 03:28:00.228347 | orchestrator | Monday 09 February 2026 03:27:24 +0000 (0:00:00.239) 0:01:23.389 ******* 2026-02-09 03:28:00.228358 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:28:00.228372 | orchestrator | 2026-02-09 03:28:00.228384 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-09 03:28:00.228412 | orchestrator | Monday 09 February 2026 03:27:30 +0000 (0:00:06.587) 0:01:29.977 
******* 2026-02-09 03:28:00.228426 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:28:00.228467 | orchestrator | 2026-02-09 03:28:00.228481 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-09 03:28:00.228495 | orchestrator | 2026-02-09 03:28:00.228508 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-09 03:28:00.228520 | orchestrator | Monday 09 February 2026 03:27:39 +0000 (0:00:09.386) 0:01:39.364 ******* 2026-02-09 03:28:00.228533 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:28:00.228546 | orchestrator | 2026-02-09 03:28:00.228583 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-09 03:28:00.228602 | orchestrator | Monday 09 February 2026 03:27:40 +0000 (0:00:00.761) 0:01:40.125 ******* 2026-02-09 03:28:00.228621 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:28:00.228640 | orchestrator | 2026-02-09 03:28:00.228660 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-09 03:28:00.228680 | orchestrator | Monday 09 February 2026 03:27:40 +0000 (0:00:00.233) 0:01:40.359 ******* 2026-02-09 03:28:00.228699 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:28:00.228715 | orchestrator | 2026-02-09 03:28:00.228729 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-09 03:28:00.228742 | orchestrator | Monday 09 February 2026 03:27:47 +0000 (0:00:06.557) 0:01:46.916 ******* 2026-02-09 03:28:00.228753 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:28:00.228764 | orchestrator | 2026-02-09 03:28:00.228775 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-09 03:28:00.228785 | orchestrator | 2026-02-09 03:28:00.228796 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-09 03:28:00.228807 | orchestrator | Monday 09 February 2026 03:27:57 +0000 (0:00:09.567) 0:01:56.484 ******* 2026-02-09 03:28:00.228817 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:28:00.228828 | orchestrator | 2026-02-09 03:28:00.228838 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-09 03:28:00.228849 | orchestrator | Monday 09 February 2026 03:27:57 +0000 (0:00:00.512) 0:01:56.996 ******* 2026-02-09 03:28:00.228860 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-09 03:28:00.228870 | orchestrator | enable_outward_rabbitmq_True 2026-02-09 03:28:00.228881 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-09 03:28:00.228891 | orchestrator | outward_rabbitmq_restart 2026-02-09 03:28:00.228902 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:28:00.228913 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:28:00.228923 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:28:00.228934 | orchestrator | 2026-02-09 03:28:00.228945 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-09 03:28:00.228956 | orchestrator | skipping: no hosts matched 2026-02-09 03:28:00.228966 | orchestrator | 2026-02-09 03:28:00.228977 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-09 03:28:00.228988 | orchestrator | skipping: no hosts matched 2026-02-09 03:28:00.228999 | orchestrator | 2026-02-09 03:28:00.229009 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-09 03:28:00.229020 | orchestrator | skipping: no hosts matched 2026-02-09 03:28:00.229030 | orchestrator | 2026-02-09 03:28:00.229041 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-09 03:28:00.229070 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-09 03:28:00.229083 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:28:00.229094 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:28:00.229104 | orchestrator | 2026-02-09 03:28:00.229115 | orchestrator | 2026-02-09 03:28:00.229126 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:28:00.229137 | orchestrator | Monday 09 February 2026 03:27:59 +0000 (0:00:02.205) 0:01:59.201 ******* 2026-02-09 03:28:00.229147 | orchestrator | =============================================================================== 2026-02-09 03:28:00.229158 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 71.03s 2026-02-09 03:28:00.229169 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.69s 2026-02-09 03:28:00.229188 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.69s 2026-02-09 03:28:00.229199 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.21s 2026-02-09 03:28:00.229210 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.96s 2026-02-09 03:28:00.229221 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.75s 2026-02-09 03:28:00.229231 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.67s 2026-02-09 03:28:00.229242 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.63s 2026-02-09 03:28:00.229253 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.54s 2026-02-09 03:28:00.229264 
| orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.50s 2026-02-09 03:28:00.229274 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.37s 2026-02-09 03:28:00.229285 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.37s 2026-02-09 03:28:00.229295 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.19s 2026-02-09 03:28:00.229306 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s 2026-02-09 03:28:00.229324 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.85s 2026-02-09 03:28:00.229336 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.85s 2026-02-09 03:28:00.229347 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.80s 2026-02-09 03:28:00.229358 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.79s 2026-02-09 03:28:00.229368 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.73s 2026-02-09 03:28:00.229379 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2026-02-09 03:28:02.821761 | orchestrator | 2026-02-09 03:28:02 | INFO  | Task 1125b91b-ce5b-4026-912b-8f530a082d4d (openvswitch) was prepared for execution. 2026-02-09 03:28:02.821905 | orchestrator | 2026-02-09 03:28:02 | INFO  | It takes a moment until task 1125b91b-ce5b-4026-912b-8f530a082d4d (openvswitch) has been started and output is visible here. 
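Every container item in the plays above carries the same kolla-style healthcheck dict: `interval`, `retries`, `start_period`, and `timeout` as string seconds, plus a `CMD-SHELL` test. A minimal sketch of converting that shape into the nanosecond-based `HealthConfig` fields the Docker Engine API expects — the conversion and field names here are an assumption based on the public Docker API, not code taken from the kolla container module:

```python
# Sketch (assumption): map a kolla-style healthcheck dict, as printed in the
# task output above, onto Docker's HealthConfig, which takes durations in
# nanoseconds. Not the actual kolla module implementation.

NS_PER_S = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    """Convert string-second fields to the nanosecond fields Docker uses."""
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_rabbitmq']
        "Interval": int(hc["interval"]) * NS_PER_S,
        "Timeout": int(hc["timeout"]) * NS_PER_S,
        "StartPeriod": int(hc["start_period"]) * NS_PER_S,
        "Retries": int(hc["retries"]),
    }

# The exact dict shape seen in "Check rabbitmq containers" above:
kolla_hc = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_rabbitmq"],
    "timeout": "30",
}
```

The same dict shape recurs in the openvswitch items below with only the `test` command differing (`ovsdb-client list-dbs`, `ovs-appctl version`).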
2026-02-09 03:28:15.880088 | orchestrator | 2026-02-09 03:28:15.880180 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 03:28:15.880191 | orchestrator | 2026-02-09 03:28:15.880198 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 03:28:15.880207 | orchestrator | Monday 09 February 2026 03:28:07 +0000 (0:00:00.270) 0:00:00.270 ******* 2026-02-09 03:28:15.880218 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:28:15.880230 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:28:15.880239 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:28:15.880249 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:28:15.880259 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:28:15.880270 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:28:15.880279 | orchestrator | 2026-02-09 03:28:15.880289 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 03:28:15.880299 | orchestrator | Monday 09 February 2026 03:28:07 +0000 (0:00:00.701) 0:00:00.971 ******* 2026-02-09 03:28:15.880309 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 03:28:15.880320 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 03:28:15.880330 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 03:28:15.880340 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 03:28:15.880350 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 03:28:15.880360 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 03:28:15.880370 | orchestrator | 2026-02-09 03:28:15.880406 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-09 03:28:15.880458 | orchestrator | 2026-02-09 03:28:15.880469 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-09 03:28:15.880479 | orchestrator | Monday 09 February 2026 03:28:08 +0000 (0:00:00.635) 0:00:01.607 ******* 2026-02-09 03:28:15.880490 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:28:15.880501 | orchestrator | 2026-02-09 03:28:15.880511 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-09 03:28:15.880522 | orchestrator | Monday 09 February 2026 03:28:09 +0000 (0:00:01.395) 0:00:03.002 ******* 2026-02-09 03:28:15.880532 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-09 03:28:15.880543 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-09 03:28:15.880552 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-09 03:28:15.880562 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-09 03:28:15.880572 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-09 03:28:15.880582 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-09 03:28:15.880593 | orchestrator | 2026-02-09 03:28:15.880602 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-09 03:28:15.880612 | orchestrator | Monday 09 February 2026 03:28:11 +0000 (0:00:01.237) 0:00:04.239 ******* 2026-02-09 03:28:15.880621 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-09 03:28:15.880631 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-09 03:28:15.880642 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-09 03:28:15.880652 | orchestrator | changed: 
[testbed-node-1] => (item=openvswitch) 2026-02-09 03:28:15.880662 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-09 03:28:15.880673 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-09 03:28:15.880683 | orchestrator | 2026-02-09 03:28:15.880694 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-09 03:28:15.880705 | orchestrator | Monday 09 February 2026 03:28:12 +0000 (0:00:01.440) 0:00:05.680 ******* 2026-02-09 03:28:15.880716 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-09 03:28:15.880727 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:28:15.880738 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-09 03:28:15.880749 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:28:15.880760 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-09 03:28:15.880770 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:28:15.880780 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-09 03:28:15.880790 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:28:15.880800 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-09 03:28:15.880811 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:28:15.880821 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-09 03:28:15.880831 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:28:15.880842 | orchestrator | 2026-02-09 03:28:15.880853 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-09 03:28:15.880863 | orchestrator | Monday 09 February 2026 03:28:13 +0000 (0:00:01.182) 0:00:06.862 ******* 2026-02-09 03:28:15.880873 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:28:15.880883 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:28:15.880893 | orchestrator | skipping: [testbed-node-2] 
2026-02-09 03:28:15.880903 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:28:15.880913 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:28:15.880922 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:28:15.880933 | orchestrator | 2026-02-09 03:28:15.880943 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-09 03:28:15.880969 | orchestrator | Monday 09 February 2026 03:28:14 +0000 (0:00:00.772) 0:00:07.634 ******* 2026-02-09 03:28:15.881009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:15.881027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:15.881038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:15.881136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:15.881165 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:15.881186 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226666 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226781 | orchestrator | 2026-02-09 03:28:18.226803 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-09 03:28:18.226823 | orchestrator | Monday 09 February 2026 03:28:15 +0000 (0:00:01.382) 0:00:09.017 ******* 2026-02-09 03:28:18.226841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:18.226972 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:20.838953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839064 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839088 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839155 | orchestrator | 2026-02-09 03:28:20.839164 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-09 03:28:20.839173 | orchestrator | Monday 09 February 2026 03:28:18 +0000 (0:00:02.352) 0:00:11.370 ******* 2026-02-09 03:28:20.839180 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:28:20.839189 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:28:20.839197 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:28:20.839204 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:28:20.839211 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:28:20.839219 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:28:20.839228 | orchestrator | 2026-02-09 03:28:20.839236 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-09 03:28:20.839244 | orchestrator | Monday 09 February 2026 03:28:19 +0000 (0:00:00.987) 0:00:12.358 ******* 2026-02-09 03:28:20.839252 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:20.839314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:40.509094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 03:28:40.509186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:40.509196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 
03:28:40.509234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:40.509241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:40.509272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:40.509279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 03:28:40.509286 | orchestrator | 2026-02-09 03:28:40.509293 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-09 03:28:40.509301 | orchestrator | Monday 09 February 2026 03:28:20 +0000 (0:00:01.624) 0:00:13.982 ******* 2026-02-09 03:28:40.509307 | orchestrator | 2026-02-09 03:28:40.509313 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-09 03:28:40.509319 | orchestrator | Monday 09 February 2026 03:28:21 +0000 (0:00:00.315) 0:00:14.297 ******* 2026-02-09 03:28:40.509330 | orchestrator | 2026-02-09 03:28:40.509336 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-09 03:28:40.509342 | orchestrator | Monday 09 February 2026 03:28:21 +0000 (0:00:00.133) 0:00:14.431 ******* 2026-02-09 03:28:40.509347 | orchestrator | 2026-02-09 03:28:40.509353 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-02-09 03:28:40.509359 | orchestrator | Monday 09 February 2026 03:28:21 +0000 (0:00:00.131) 0:00:14.562 ******* 2026-02-09 03:28:40.509365 | orchestrator | 2026-02-09 03:28:40.509371 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-09 03:28:40.509376 | orchestrator | Monday 09 February 2026 03:28:21 +0000 (0:00:00.136) 0:00:14.698 ******* 2026-02-09 03:28:40.509400 | orchestrator | 2026-02-09 03:28:40.509406 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-09 03:28:40.509412 | orchestrator | Monday 09 February 2026 03:28:21 +0000 (0:00:00.150) 0:00:14.849 ******* 2026-02-09 03:28:40.509418 | orchestrator | 2026-02-09 03:28:40.509423 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-09 03:28:40.509429 | orchestrator | Monday 09 February 2026 03:28:21 +0000 (0:00:00.131) 0:00:14.980 ******* 2026-02-09 03:28:40.509435 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:28:40.509442 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:28:40.509448 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:28:40.509453 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:28:40.509459 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:28:40.509465 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:28:40.509471 | orchestrator | 2026-02-09 03:28:40.509477 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-09 03:28:40.509483 | orchestrator | Monday 09 February 2026 03:28:25 +0000 (0:00:03.791) 0:00:18.772 ******* 2026-02-09 03:28:40.509493 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:28:40.509500 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:28:40.509506 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:28:40.509511 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:28:40.509517 | orchestrator | ok: 
[testbed-node-4] 2026-02-09 03:28:40.509523 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:28:40.509528 | orchestrator | 2026-02-09 03:28:40.509534 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-09 03:28:40.509540 | orchestrator | Monday 09 February 2026 03:28:26 +0000 (0:00:01.060) 0:00:19.832 ******* 2026-02-09 03:28:40.509546 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:28:40.509552 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:28:40.509558 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:28:40.509563 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:28:40.509569 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:28:40.509575 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:28:40.509580 | orchestrator | 2026-02-09 03:28:40.509586 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-09 03:28:40.509592 | orchestrator | Monday 09 February 2026 03:28:34 +0000 (0:00:07.697) 0:00:27.530 ******* 2026-02-09 03:28:40.509599 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-09 03:28:40.509606 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-09 03:28:40.509613 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-09 03:28:40.509620 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-09 03:28:40.509627 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-09 03:28:40.509634 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-09 
03:28:40.509641 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-09 03:28:40.509657 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-09 03:28:53.493202 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-09 03:28:53.493340 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-09 03:28:53.493459 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-09 03:28:53.493484 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-09 03:28:53.493504 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-09 03:28:53.493522 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-09 03:28:53.493540 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-09 03:28:53.493558 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-09 03:28:53.493571 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-09 03:28:53.493585 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-09 03:28:53.493605 | orchestrator | 2026-02-09 03:28:53.493624 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-02-09 03:28:53.493644 | orchestrator | Monday 09 February 2026 03:28:40 +0000 (0:00:06.031) 0:00:33.562 ******* 2026-02-09 03:28:53.493663 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-09 03:28:53.493682 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:28:53.493703 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-09 03:28:53.493723 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:28:53.493743 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-09 03:28:53.493763 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:28:53.493779 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-09 03:28:53.493792 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-09 03:28:53.493804 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-09 03:28:53.493817 | orchestrator | 2026-02-09 03:28:53.493831 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-09 03:28:53.493844 | orchestrator | Monday 09 February 2026 03:28:42 +0000 (0:00:02.350) 0:00:35.912 ******* 2026-02-09 03:28:53.493856 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-09 03:28:53.493869 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:28:53.493882 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-09 03:28:53.493894 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:28:53.493906 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-09 03:28:53.493918 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:28:53.493931 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-09 03:28:53.493944 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-09 03:28:53.493973 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-09 03:28:53.493987 | orchestrator 
|
2026-02-09 03:28:53.494000 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-09 03:28:53.494013 | orchestrator | Monday 09 February 2026 03:28:45 +0000 (0:00:03.078) 0:00:38.991 *******
2026-02-09 03:28:53.494096 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:28:53.494108 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:28:53.494145 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:28:53.494157 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:28:53.494167 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:28:53.494178 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:28:53.494189 | orchestrator |
2026-02-09 03:28:53.494200 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:28:53.494212 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-09 03:28:53.494225 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-09 03:28:53.494236 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-09 03:28:53.494247 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-09 03:28:53.494258 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-09 03:28:53.494269 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-09 03:28:53.494279 | orchestrator |
2026-02-09 03:28:53.494290 | orchestrator |
2026-02-09 03:28:53.494301 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:28:53.494312 | orchestrator | Monday 09 February 2026 03:28:53 +0000 (0:00:07.112) 0:00:46.104 *******
2026-02-09 03:28:53.494344 | orchestrator | ===============================================================================
2026-02-09 03:28:53.494356 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.81s
2026-02-09 03:28:53.494397 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.03s
2026-02-09 03:28:53.494409 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 3.79s
2026-02-09 03:28:53.494419 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.08s
2026-02-09 03:28:53.494430 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.35s
2026-02-09 03:28:53.494440 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.35s
2026-02-09 03:28:53.494451 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.62s
2026-02-09 03:28:53.494462 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.44s
2026-02-09 03:28:53.494472 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.40s
2026-02-09 03:28:53.494483 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.38s
2026-02-09 03:28:53.494494 | orchestrator | module-load : Load modules ---------------------------------------------- 1.24s
2026-02-09 03:28:53.494505 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.18s
2026-02-09 03:28:53.494515 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.06s
2026-02-09 03:28:53.494526 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.00s
2026-02-09 03:28:53.494537 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.99s
2026-02-09 03:28:53.494547 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.77s
2026-02-09 03:28:53.494558 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.70s
2026-02-09 03:28:53.494569 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.64s
2026-02-09 03:28:55.949394 | orchestrator | 2026-02-09 03:28:55 | INFO  | Task c5be7ddf-c0be-4adf-9532-787220f9ef99 (ovn) was prepared for execution.
2026-02-09 03:28:55.952082 | orchestrator | 2026-02-09 03:28:55 | INFO  | It takes a moment until task c5be7ddf-c0be-4adf-9532-787220f9ef99 (ovn) has been started and output is visible here.
2026-02-09 03:29:06.852540 | orchestrator |
2026-02-09 03:29:06.852676 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 03:29:06.852701 | orchestrator |
2026-02-09 03:29:06.852722 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 03:29:06.852743 | orchestrator | Monday 09 February 2026 03:29:00 +0000 (0:00:00.187) 0:00:00.187 *******
2026-02-09 03:29:06.852762 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:29:06.852783 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:29:06.852795 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:29:06.852806 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:29:06.852817 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:29:06.852828 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:29:06.852839 | orchestrator |
2026-02-09 03:29:06.852850 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 03:29:06.852863 | orchestrator | Monday 09 February 2026 03:29:01 +0000 (0:00:00.716) 0:00:00.903 *******
2026-02-09 03:29:06.852903 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-09 03:29:06.852924 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-09
03:29:06.852943 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-09 03:29:06.852961 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-09 03:29:06.852979 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-09 03:29:06.852998 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-09 03:29:06.853016 | orchestrator | 2026-02-09 03:29:06.853034 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-09 03:29:06.853053 | orchestrator | 2026-02-09 03:29:06.853072 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-09 03:29:06.853093 | orchestrator | Monday 09 February 2026 03:29:01 +0000 (0:00:00.836) 0:00:01.740 ******* 2026-02-09 03:29:06.853113 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:29:06.853133 | orchestrator | 2026-02-09 03:29:06.853153 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-09 03:29:06.853172 | orchestrator | Monday 09 February 2026 03:29:03 +0000 (0:00:01.154) 0:00:02.895 ******* 2026-02-09 03:29:06.853194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853235 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853448 | orchestrator | 2026-02-09 03:29:06.853466 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-09 03:29:06.853485 | orchestrator | Monday 09 February 2026 03:29:04 +0000 (0:00:01.186) 0:00:04.081 ******* 2026-02-09 03:29:06.853516 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853555 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853572 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853640 | orchestrator | 2026-02-09 03:29:06.853659 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-09 03:29:06.853678 | orchestrator | Monday 09 February 2026 03:29:05 +0000 (0:00:01.474) 0:00:05.556 ******* 2026-02-09 03:29:06.853697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:06.853749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295299 | orchestrator | 2026-02-09 03:29:30.295340 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-09 03:29:30.295354 | orchestrator | Monday 09 February 2026 03:29:06 +0000 (0:00:01.135) 0:00:06.691 ******* 2026-02-09 03:29:30.295365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295375 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295415 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295477 | orchestrator | 2026-02-09 03:29:30.295487 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-09 03:29:30.295497 | orchestrator | Monday 09 February 2026 03:29:08 +0000 (0:00:01.614) 0:00:08.306 ******* 
2026-02-09 03:29:30.295514 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295561 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:29:30.295581 | orchestrator | 2026-02-09 03:29:30.295591 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-09 03:29:30.295601 | orchestrator | Monday 09 February 2026 03:29:09 +0000 (0:00:01.472) 0:00:09.778 ******* 2026-02-09 03:29:30.295611 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:29:30.295623 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:29:30.295632 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:29:30.295642 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:29:30.295651 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:29:30.295660 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:29:30.295670 | orchestrator | 2026-02-09 03:29:30.295680 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-09 03:29:30.295689 | orchestrator | Monday 09 February 2026 03:29:12 +0000 (0:00:02.399) 0:00:12.177 ******* 2026-02-09 03:29:30.295699 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-02-09 03:29:30.295710 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-09 03:29:30.295720 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-09 03:29:30.295729 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-09 03:29:30.295738 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-09 03:29:30.295748 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-09 03:29:30.295764 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 03:30:09.506010 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 03:30:09.506194 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 03:30:09.506228 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 03:30:09.506242 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 03:30:09.506253 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 03:30:09.506298 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-09 03:30:09.506312 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-09 03:30:09.506350 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-09 03:30:09.506362 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-09 03:30:09.506373 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-09 03:30:09.506383 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-09 03:30:09.506395 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 03:30:09.506407 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 03:30:09.506431 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 03:30:09.506453 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 03:30:09.506466 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 03:30:09.506477 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 03:30:09.506488 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 03:30:09.506499 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 03:30:09.506509 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 03:30:09.506520 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 03:30:09.506533 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-09 03:30:09.506546 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 03:30:09.506559 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 03:30:09.506572 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 03:30:09.506590 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 03:30:09.506610 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 03:30:09.506624 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 03:30:09.506637 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 03:30:09.506649 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-09 03:30:09.506663 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-09 03:30:09.506676 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-09 03:30:09.506689 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-09 03:30:09.506702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-09 03:30:09.506715 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-09 03:30:09.506729 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2026-02-09 03:30:09.506771 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-09 03:30:09.506785 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-09 03:30:09.506805 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-09 03:30:09.506819 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-09 03:30:09.506831 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-09 03:30:09.506845 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-09 03:30:09.506857 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-09 03:30:09.506871 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-09 03:30:09.506885 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-09 03:30:09.506896 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-09 03:30:09.506907 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-09 03:30:09.506918 | orchestrator | 2026-02-09 03:30:09.506930 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-09 03:30:09.506940 | orchestrator | Monday 09 February 2026 03:29:29 +0000 (0:00:17.367) 0:00:29.544 ******* 2026-02-09 03:30:09.506951 | orchestrator | 2026-02-09 03:30:09.506962 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 03:30:09.506973 | orchestrator | Monday 09 February 2026 03:29:29 +0000 (0:00:00.253) 0:00:29.798 ******* 2026-02-09 03:30:09.506984 | orchestrator | 2026-02-09 03:30:09.506994 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 03:30:09.507005 | orchestrator | Monday 09 February 2026 03:29:30 +0000 (0:00:00.064) 0:00:29.862 ******* 2026-02-09 03:30:09.507015 | orchestrator | 2026-02-09 03:30:09.507026 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 03:30:09.507037 | orchestrator | Monday 09 February 2026 03:29:30 +0000 (0:00:00.064) 0:00:29.926 ******* 2026-02-09 03:30:09.507047 | orchestrator | 2026-02-09 03:30:09.507058 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 03:30:09.507069 | orchestrator | Monday 09 February 2026 03:29:30 +0000 (0:00:00.064) 0:00:29.990 ******* 2026-02-09 03:30:09.507079 | orchestrator | 2026-02-09 03:30:09.507090 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 03:30:09.507100 | orchestrator | Monday 09 February 2026 03:29:30 +0000 (0:00:00.065) 0:00:30.056 ******* 2026-02-09 03:30:09.507111 | orchestrator | 2026-02-09 03:30:09.507122 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-09 03:30:09.507133 | orchestrator | Monday 09 February 2026 03:29:30 +0000 (0:00:00.070) 0:00:30.126 ******* 2026-02-09 03:30:09.507143 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:30:09.507155 | orchestrator | ok: 
[testbed-node-5] 2026-02-09 03:30:09.507166 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:30:09.507176 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:30:09.507187 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:30:09.507198 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:30:09.507208 | orchestrator | 2026-02-09 03:30:09.507219 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-09 03:30:09.507230 | orchestrator | Monday 09 February 2026 03:29:31 +0000 (0:00:01.523) 0:00:31.650 ******* 2026-02-09 03:30:09.507248 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:30:09.507259 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:30:09.507300 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:30:09.507318 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:30:09.507336 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:30:09.507355 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:30:09.507372 | orchestrator | 2026-02-09 03:30:09.507390 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-09 03:30:09.507401 | orchestrator | 2026-02-09 03:30:09.507412 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-09 03:30:09.507422 | orchestrator | Monday 09 February 2026 03:30:07 +0000 (0:00:35.405) 0:01:07.055 ******* 2026-02-09 03:30:09.507433 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:30:09.507444 | orchestrator | 2026-02-09 03:30:09.507454 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-09 03:30:09.507465 | orchestrator | Monday 09 February 2026 03:30:07 +0000 (0:00:00.744) 0:01:07.799 ******* 2026-02-09 03:30:09.507476 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-09 03:30:09.507487 | orchestrator | 2026-02-09 03:30:09.507497 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-09 03:30:09.507508 | orchestrator | Monday 09 February 2026 03:30:08 +0000 (0:00:00.610) 0:01:08.410 ******* 2026-02-09 03:30:09.507519 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:30:09.507529 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:30:09.507540 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:30:09.507551 | orchestrator | 2026-02-09 03:30:09.507562 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-09 03:30:09.507581 | orchestrator | Monday 09 February 2026 03:30:09 +0000 (0:00:00.929) 0:01:09.340 ******* 2026-02-09 03:30:21.297122 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:30:21.297215 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:30:21.297225 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:30:21.297232 | orchestrator | 2026-02-09 03:30:21.297240 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-09 03:30:21.297300 | orchestrator | Monday 09 February 2026 03:30:09 +0000 (0:00:00.344) 0:01:09.684 ******* 2026-02-09 03:30:21.297309 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:30:21.297316 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:30:21.297322 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:30:21.297329 | orchestrator | 2026-02-09 03:30:21.297335 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-09 03:30:21.297342 | orchestrator | Monday 09 February 2026 03:30:10 +0000 (0:00:00.414) 0:01:10.099 ******* 2026-02-09 03:30:21.297348 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:30:21.297356 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:30:21.297366 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:30:21.297377 | orchestrator | 
2026-02-09 03:30:21.297387 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-09 03:30:21.297397 | orchestrator | Monday 09 February 2026 03:30:10 +0000 (0:00:00.376) 0:01:10.475 *******
2026-02-09 03:30:21.297406 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:30:21.297415 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:30:21.297425 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:30:21.297435 | orchestrator |
2026-02-09 03:30:21.297445 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-09 03:30:21.297455 | orchestrator | Monday 09 February 2026 03:30:11 +0000 (0:00:00.555) 0:01:11.031 *******
2026-02-09 03:30:21.297466 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297479 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297487 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297493 | orchestrator |
2026-02-09 03:30:21.297500 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-09 03:30:21.297527 | orchestrator | Monday 09 February 2026 03:30:11 +0000 (0:00:00.314) 0:01:11.345 *******
2026-02-09 03:30:21.297534 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297540 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297546 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297552 | orchestrator |
2026-02-09 03:30:21.297558 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-09 03:30:21.297565 | orchestrator | Monday 09 February 2026 03:30:11 +0000 (0:00:00.327) 0:01:11.672 *******
2026-02-09 03:30:21.297571 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297577 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297583 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297589 | orchestrator |
2026-02-09 03:30:21.297595 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-09 03:30:21.297601 | orchestrator | Monday 09 February 2026 03:30:12 +0000 (0:00:00.301) 0:01:11.974 *******
2026-02-09 03:30:21.297607 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297614 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297620 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297626 | orchestrator |
2026-02-09 03:30:21.297632 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-09 03:30:21.297638 | orchestrator | Monday 09 February 2026 03:30:12 +0000 (0:00:00.314) 0:01:12.289 *******
2026-02-09 03:30:21.297644 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297651 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297657 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297663 | orchestrator |
2026-02-09 03:30:21.297669 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-09 03:30:21.297675 | orchestrator | Monday 09 February 2026 03:30:12 +0000 (0:00:00.540) 0:01:12.830 *******
2026-02-09 03:30:21.297681 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297687 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297693 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297699 | orchestrator |
2026-02-09 03:30:21.297705 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-09 03:30:21.297711 | orchestrator | Monday 09 February 2026 03:30:13 +0000 (0:00:00.301) 0:01:13.131 *******
2026-02-09 03:30:21.297718 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297724 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297730 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297736 | orchestrator |
2026-02-09 03:30:21.297742 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-09 03:30:21.297748 | orchestrator | Monday 09 February 2026 03:30:13 +0000 (0:00:00.363) 0:01:13.494 *******
2026-02-09 03:30:21.297754 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297761 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297767 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297773 | orchestrator |
2026-02-09 03:30:21.297779 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-09 03:30:21.297785 | orchestrator | Monday 09 February 2026 03:30:13 +0000 (0:00:00.336) 0:01:13.830 *******
2026-02-09 03:30:21.297791 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297797 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297803 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297809 | orchestrator |
2026-02-09 03:30:21.297815 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-09 03:30:21.297822 | orchestrator | Monday 09 February 2026 03:30:14 +0000 (0:00:00.519) 0:01:14.350 *******
2026-02-09 03:30:21.297828 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297834 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297840 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297846 | orchestrator |
2026-02-09 03:30:21.297852 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-09 03:30:21.297864 | orchestrator | Monday 09 February 2026 03:30:14 +0000 (0:00:00.307) 0:01:14.658 *******
2026-02-09 03:30:21.297871 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297877 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297883 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297889 | orchestrator |
2026-02-09 03:30:21.297895 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-09 03:30:21.297901 | orchestrator | Monday 09 February 2026 03:30:15 +0000 (0:00:00.305) 0:01:14.963 *******
2026-02-09 03:30:21.297924 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.297931 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.297937 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.297943 | orchestrator |
2026-02-09 03:30:21.297949 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-09 03:30:21.297961 | orchestrator | Monday 09 February 2026 03:30:15 +0000 (0:00:00.297) 0:01:15.261 *******
2026-02-09 03:30:21.297968 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:30:21.297974 | orchestrator |
2026-02-09 03:30:21.297981 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-09 03:30:21.297987 | orchestrator | Monday 09 February 2026 03:30:16 +0000 (0:00:00.827) 0:01:16.088 *******
2026-02-09 03:30:21.297993 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:30:21.297999 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:30:21.298005 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:30:21.298011 | orchestrator |
2026-02-09 03:30:21.298061 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-09 03:30:21.298067 | orchestrator | Monday 09 February 2026 03:30:16 +0000 (0:00:00.493) 0:01:16.582 *******
2026-02-09 03:30:21.298074 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:30:21.298080 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:30:21.298086 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:30:21.298092 | orchestrator |
2026-02-09 03:30:21.298098 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-09 03:30:21.298105 | orchestrator | Monday 09 February 2026 03:30:17 +0000 (0:00:00.459) 0:01:17.041 *******
2026-02-09 03:30:21.298111 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.298117 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.298123 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.298129 | orchestrator |
2026-02-09 03:30:21.298135 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-09 03:30:21.298142 | orchestrator | Monday 09 February 2026 03:30:17 +0000 (0:00:00.331) 0:01:17.373 *******
2026-02-09 03:30:21.298148 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.298154 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.298160 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.298166 | orchestrator |
2026-02-09 03:30:21.298173 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-09 03:30:21.298179 | orchestrator | Monday 09 February 2026 03:30:18 +0000 (0:00:00.622) 0:01:17.996 *******
2026-02-09 03:30:21.298185 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.298191 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.298202 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.298212 | orchestrator |
2026-02-09 03:30:21.298221 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-09 03:30:21.298231 | orchestrator | Monday 09 February 2026 03:30:18 +0000 (0:00:00.369) 0:01:18.366 *******
2026-02-09 03:30:21.298241 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.298302 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.298315 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.298326 | orchestrator |
2026-02-09 03:30:21.298336 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-09 03:30:21.298347 | orchestrator | Monday 09 February 2026 03:30:18 +0000 (0:00:00.363) 0:01:18.729 *******
2026-02-09 03:30:21.298368 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.298384 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.298396 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.298405 | orchestrator |
2026-02-09 03:30:21.298414 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-09 03:30:21.298424 | orchestrator | Monday 09 February 2026 03:30:19 +0000 (0:00:00.382) 0:01:19.112 *******
2026-02-09 03:30:21.298434 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:21.298444 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:21.298455 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:21.298461 | orchestrator |
2026-02-09 03:30:21.298468 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-09 03:30:21.298474 | orchestrator | Monday 09 February 2026 03:30:19 +0000 (0:00:00.597) 0:01:19.710 *******
2026-02-09 03:30:21.298483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:21.298496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:21.298505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:21.298546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453339 | orchestrator |
2026-02-09 03:30:27.453346 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-09 03:30:27.453354 | orchestrator | Monday 09 February 2026 03:30:21 +0000 (0:00:01.428) 0:01:21.138 *******
2026-02-09 03:30:27.453364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453436 | orchestrator |
2026-02-09 03:30:27.453440 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-09 03:30:27.453444 | orchestrator | Monday 09 February 2026 03:30:25 +0000 (0:00:03.778) 0:01:24.917 *******
2026-02-09 03:30:27.453448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:27.453475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.995797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.995892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.995899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.995903 | orchestrator |
2026-02-09 03:30:54.995908 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-09 03:30:54.995913 | orchestrator | Monday 09 February 2026 03:30:27 +0000 (0:00:01.998) 0:01:26.916 *******
2026-02-09 03:30:54.995917 | orchestrator |
2026-02-09 03:30:54.995921 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-09 03:30:54.995925 | orchestrator | Monday 09 February 2026 03:30:27 +0000 (0:00:00.072) 0:01:26.988 *******
2026-02-09 03:30:54.995929 | orchestrator |
2026-02-09 03:30:54.995932 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-09 03:30:54.995936 | orchestrator | Monday 09 February 2026 03:30:27 +0000 (0:00:00.224) 0:01:27.213 *******
2026-02-09 03:30:54.995940 | orchestrator |
2026-02-09 03:30:54.995943 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-09 03:30:54.995947 | orchestrator | Monday 09 February 2026 03:30:27 +0000 (0:00:00.077) 0:01:27.290 *******
2026-02-09 03:30:54.995951 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:30:54.995956 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:30:54.995959 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:30:54.995963 | orchestrator |
2026-02-09 03:30:54.995967 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-09 03:30:54.995971 | orchestrator | Monday 09 February 2026 03:30:33 +0000 (0:00:06.486) 0:01:33.777 *******
2026-02-09 03:30:54.995974 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:30:54.995978 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:30:54.995982 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:30:54.995985 | orchestrator |
2026-02-09 03:30:54.995989 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-09 03:30:54.995993 | orchestrator | Monday 09 February 2026 03:30:40 +0000 (0:00:06.537) 0:01:40.314 *******
2026-02-09 03:30:54.995996 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:30:54.996000 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:30:54.996004 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:30:54.996007 | orchestrator |
2026-02-09 03:30:54.996011 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-09 03:30:54.996015 | orchestrator | Monday 09 February 2026 03:30:48 +0000 (0:00:07.577) 0:01:47.892 *******
2026-02-09 03:30:54.996018 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:30:54.996022 | orchestrator |
2026-02-09 03:30:54.996026 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-09 03:30:54.996030 | orchestrator | Monday 09 February 2026 03:30:48 +0000 (0:00:00.114) 0:01:48.007 *******
2026-02-09 03:30:54.996033 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:30:54.996038 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:30:54.996042 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:30:54.996046 | orchestrator |
2026-02-09 03:30:54.996049 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-09 03:30:54.996053 | orchestrator | Monday 09 February 2026 03:30:49 +0000 (0:00:01.051) 0:01:49.058 *******
2026-02-09 03:30:54.996057 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:54.996065 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:54.996068 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:30:54.996072 | orchestrator |
2026-02-09 03:30:54.996076 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-09 03:30:54.996079 | orchestrator | Monday 09 February 2026 03:30:49 +0000 (0:00:00.598) 0:01:49.658 *******
2026-02-09 03:30:54.996083 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:30:54.996087 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:30:54.996090 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:30:54.996094 | orchestrator |
2026-02-09 03:30:54.996098 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-09 03:30:54.996110 | orchestrator | Monday 09 February 2026 03:30:50 +0000 (0:00:00.757) 0:01:50.415 *******
2026-02-09 03:30:54.996114 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:30:54.996118 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:30:54.996122 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:30:54.996125 | orchestrator |
2026-02-09 03:30:54.996129 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-09 03:30:54.996133 | orchestrator | Monday 09 February 2026 03:30:51 +0000 (0:00:00.590) 0:01:51.005 *******
2026-02-09 03:30:54.996136 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:30:54.996140 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:30:54.996153 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:30:54.996157 | orchestrator |
2026-02-09 03:30:54.996161 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-09 03:30:54.996164 | orchestrator | Monday 09 February 2026 03:30:52 +0000 (0:00:01.240) 0:01:52.245 *******
2026-02-09 03:30:54.996168 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:30:54.996172 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:30:54.996176 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:30:54.996179 | orchestrator |
2026-02-09 03:30:54.996183 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-09 03:30:54.996187 | orchestrator | Monday 09 February 2026 03:30:53 +0000 (0:00:00.806) 0:01:53.052 *******
2026-02-09 03:30:54.996191 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:30:54.996194 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:30:54.996198 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:30:54.996202 | orchestrator |
2026-02-09 03:30:54.996234 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-09 03:30:54.996238 | orchestrator | Monday 09 February 2026 03:30:53 +0000 (0:00:00.326) 0:01:53.378 *******
2026-02-09 03:30:54.996243 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.996249 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.996253 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.996263 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.996270 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.996274 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.996278 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.996285 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:30:54.996295 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.553711 | orchestrator |
2026-02-09 03:31:02.553831 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-09 03:31:02.553848 | orchestrator | Monday 09 February 2026 03:30:54 +0000 (0:00:01.451) 0:01:54.830 *******
2026-02-09 03:31:02.553863 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.553878 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.553890 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.553901 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.553940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.553952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.553963 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.553975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.554001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.554013 | orchestrator |
2026-02-09 03:31:02.554126 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-09 03:31:02.554146 | orchestrator | Monday 09 February 2026 03:30:59 +0000 (0:00:04.177) 0:01:59.008 *******
2026-02-09 03:31:02.554193 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.554236 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.554251 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.554265 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.554293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.554306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.554321 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 03:31:02.554334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:31:02.554354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 03:31:02.554369 | orchestrator | 2026-02-09 03:31:02.554383 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-09 03:31:02.554401 | orchestrator | Monday 09 February 2026 03:31:02 +0000 (0:00:03.055) 0:02:02.063 ******* 2026-02-09 03:31:02.554424 | orchestrator | 2026-02-09 03:31:02.554450 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-09 03:31:02.554468 | orchestrator | Monday 09 February 2026 03:31:02 +0000 (0:00:00.094) 0:02:02.158 ******* 2026-02-09 03:31:02.554486 | orchestrator | 2026-02-09 03:31:02.554504 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-09 03:31:02.554524 | orchestrator | Monday 09 February 2026 03:31:02 +0000 (0:00:00.118) 0:02:02.277 ******* 2026-02-09 03:31:02.554542 | orchestrator | 2026-02-09 03:31:02.554573 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-09 03:31:26.639838 | orchestrator | Monday 09 February 2026 03:31:02 +0000 (0:00:00.103) 0:02:02.380 ******* 2026-02-09 03:31:26.639913 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:31:26.639920 | orchestrator | changed: 
[testbed-node-2] 2026-02-09 03:31:26.639925 | orchestrator | 2026-02-09 03:31:26.639930 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-09 03:31:26.639935 | orchestrator | Monday 09 February 2026 03:31:08 +0000 (0:00:06.226) 0:02:08.606 ******* 2026-02-09 03:31:26.639939 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:31:26.639943 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:31:26.639947 | orchestrator | 2026-02-09 03:31:26.639951 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-09 03:31:26.639972 | orchestrator | Monday 09 February 2026 03:31:14 +0000 (0:00:06.143) 0:02:14.750 ******* 2026-02-09 03:31:26.639976 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:31:26.639980 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:31:26.639984 | orchestrator | 2026-02-09 03:31:26.639988 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-09 03:31:26.639994 | orchestrator | Monday 09 February 2026 03:31:21 +0000 (0:00:06.180) 0:02:20.931 ******* 2026-02-09 03:31:26.640000 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:31:26.640006 | orchestrator | 2026-02-09 03:31:26.640012 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-09 03:31:26.640018 | orchestrator | Monday 09 February 2026 03:31:21 +0000 (0:00:00.134) 0:02:21.065 ******* 2026-02-09 03:31:26.640024 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:31:26.640031 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:31:26.640037 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:31:26.640043 | orchestrator | 2026-02-09 03:31:26.640048 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-09 03:31:26.640054 | orchestrator | Monday 09 February 2026 03:31:22 +0000 (0:00:00.995) 0:02:22.061 ******* 
2026-02-09 03:31:26.640060 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:31:26.640066 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:31:26.640072 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:31:26.640078 | orchestrator | 2026-02-09 03:31:26.640085 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-09 03:31:26.640092 | orchestrator | Monday 09 February 2026 03:31:22 +0000 (0:00:00.620) 0:02:22.681 ******* 2026-02-09 03:31:26.640099 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:31:26.640106 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:31:26.640113 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:31:26.640119 | orchestrator | 2026-02-09 03:31:26.640125 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-09 03:31:26.640132 | orchestrator | Monday 09 February 2026 03:31:23 +0000 (0:00:00.796) 0:02:23.478 ******* 2026-02-09 03:31:26.640139 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:31:26.640145 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:31:26.640152 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:31:26.640159 | orchestrator | 2026-02-09 03:31:26.640166 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-09 03:31:26.640173 | orchestrator | Monday 09 February 2026 03:31:24 +0000 (0:00:00.622) 0:02:24.100 ******* 2026-02-09 03:31:26.640197 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:31:26.640203 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:31:26.640209 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:31:26.640214 | orchestrator | 2026-02-09 03:31:26.640221 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-09 03:31:26.640227 | orchestrator | Monday 09 February 2026 03:31:25 +0000 (0:00:01.022) 0:02:25.123 ******* 2026-02-09 03:31:26.640230 | orchestrator 
| ok: [testbed-node-0] 2026-02-09 03:31:26.640234 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:31:26.640238 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:31:26.640242 | orchestrator | 2026-02-09 03:31:26.640245 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:31:26.640251 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-09 03:31:26.640256 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-09 03:31:26.640260 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-09 03:31:26.640264 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:31:26.640275 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:31:26.640279 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 03:31:26.640282 | orchestrator | 2026-02-09 03:31:26.640286 | orchestrator | 2026-02-09 03:31:26.640300 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:31:26.640304 | orchestrator | Monday 09 February 2026 03:31:26 +0000 (0:00:00.921) 0:02:26.044 ******* 2026-02-09 03:31:26.640308 | orchestrator | =============================================================================== 2026-02-09 03:31:26.640312 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.41s 2026-02-09 03:31:26.640316 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.37s 2026-02-09 03:31:26.640319 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.76s 2026-02-09 03:31:26.640323 | orchestrator | ovn-db 
: Restart ovn-nb-db container ----------------------------------- 12.71s 2026-02-09 03:31:26.640327 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.68s 2026-02-09 03:31:26.640347 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.18s 2026-02-09 03:31:26.640353 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.78s 2026-02-09 03:31:26.640359 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.06s 2026-02-09 03:31:26.640364 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.40s 2026-02-09 03:31:26.640370 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.00s 2026-02-09 03:31:26.640375 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.61s 2026-02-09 03:31:26.640381 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.52s 2026-02-09 03:31:26.640386 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.47s 2026-02-09 03:31:26.640393 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.47s 2026-02-09 03:31:26.640398 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2026-02-09 03:31:26.640405 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s 2026-02-09 03:31:26.640411 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.24s 2026-02-09 03:31:26.640417 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.19s 2026-02-09 03:31:26.640424 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.15s 2026-02-09 03:31:26.640430 | orchestrator | ovn-controller : 
Ensuring systemd override directory exists ------------- 1.14s 2026-02-09 03:31:27.036305 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-09 03:31:27.036394 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-09 03:31:29.443697 | orchestrator | 2026-02-09 03:31:29 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-09 03:31:39.570362 | orchestrator | 2026-02-09 03:31:39 | INFO  | Task 8f8ec3a2-5d02-4001-ab8d-c5230ed4cb6a (wipe-partitions) was prepared for execution. 2026-02-09 03:31:39.570455 | orchestrator | 2026-02-09 03:31:39 | INFO  | It takes a moment until task 8f8ec3a2-5d02-4001-ab8d-c5230ed4cb6a (wipe-partitions) has been started and output is visible here. 2026-02-09 03:31:52.462923 | orchestrator | 2026-02-09 03:31:52.463003 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-09 03:31:52.463010 | orchestrator | 2026-02-09 03:31:52.463016 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-09 03:31:52.463021 | orchestrator | Monday 09 February 2026 03:31:43 +0000 (0:00:00.135) 0:00:00.135 ******* 2026-02-09 03:31:52.463046 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:31:52.463052 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:31:52.463057 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:31:52.463061 | orchestrator | 2026-02-09 03:31:52.463066 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-09 03:31:52.463071 | orchestrator | Monday 09 February 2026 03:31:44 +0000 (0:00:00.623) 0:00:00.759 ******* 2026-02-09 03:31:52.463075 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:31:52.463080 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:31:52.463085 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:31:52.463089 | orchestrator | 2026-02-09 03:31:52.463094 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-09 03:31:52.463099 | orchestrator | Monday 09 February 2026 03:31:44 +0000 (0:00:00.417) 0:00:01.177 ******* 2026-02-09 03:31:52.463103 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:31:52.463109 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:31:52.463114 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:31:52.463118 | orchestrator | 2026-02-09 03:31:52.463122 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-09 03:31:52.463127 | orchestrator | Monday 09 February 2026 03:31:45 +0000 (0:00:00.593) 0:00:01.771 ******* 2026-02-09 03:31:52.463132 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:31:52.463138 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:31:52.463146 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:31:52.463201 | orchestrator | 2026-02-09 03:31:52.463206 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-09 03:31:52.463210 | orchestrator | Monday 09 February 2026 03:31:45 +0000 (0:00:00.303) 0:00:02.074 ******* 2026-02-09 03:31:52.463214 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-09 03:31:52.463219 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-09 03:31:52.463223 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-09 03:31:52.463227 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-09 03:31:52.463231 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-09 03:31:52.463235 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-09 03:31:52.463249 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-09 03:31:52.463254 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-09 03:31:52.463258 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-02-09 03:31:52.463262 | orchestrator | 2026-02-09 03:31:52.463266 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-09 03:31:52.463270 | orchestrator | Monday 09 February 2026 03:31:47 +0000 (0:00:01.221) 0:00:03.296 ******* 2026-02-09 03:31:52.463275 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-09 03:31:52.463279 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-09 03:31:52.463283 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-09 03:31:52.463287 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-09 03:31:52.463291 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-09 03:31:52.463295 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-09 03:31:52.463299 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-09 03:31:52.463303 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-09 03:31:52.463307 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-09 03:31:52.463311 | orchestrator | 2026-02-09 03:31:52.463316 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-09 03:31:52.463320 | orchestrator | Monday 09 February 2026 03:31:48 +0000 (0:00:01.568) 0:00:04.864 ******* 2026-02-09 03:31:52.463324 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-09 03:31:52.463328 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-09 03:31:52.463332 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-09 03:31:52.463336 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-09 03:31:52.463345 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-09 03:31:52.463349 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-09 03:31:52.463353 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-09 03:31:52.463357 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-09 03:31:52.463361 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-09 03:31:52.463365 | orchestrator | 2026-02-09 03:31:52.463369 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-09 03:31:52.463373 | orchestrator | Monday 09 February 2026 03:31:50 +0000 (0:00:02.141) 0:00:07.006 ******* 2026-02-09 03:31:52.463378 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:31:52.463382 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:31:52.463386 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:31:52.463390 | orchestrator | 2026-02-09 03:31:52.463394 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-09 03:31:52.463398 | orchestrator | Monday 09 February 2026 03:31:51 +0000 (0:00:00.631) 0:00:07.638 ******* 2026-02-09 03:31:52.463402 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:31:52.463406 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:31:52.463410 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:31:52.463414 | orchestrator | 2026-02-09 03:31:52.463418 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:31:52.463424 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:31:52.463429 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:31:52.463447 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:31:52.463453 | orchestrator | 2026-02-09 03:31:52.463460 | orchestrator | 2026-02-09 03:31:52.463467 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:31:52.463474 | orchestrator | Monday 09 February 2026 03:31:52 +0000 
(0:00:00.643) 0:00:08.281 ******* 2026-02-09 03:31:52.463481 | orchestrator | =============================================================================== 2026-02-09 03:31:52.463488 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.14s 2026-02-09 03:31:52.463495 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s 2026-02-09 03:31:52.463501 | orchestrator | Check device availability ----------------------------------------------- 1.22s 2026-02-09 03:31:52.463507 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2026-02-09 03:31:52.463514 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s 2026-02-09 03:31:52.463521 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.62s 2026-02-09 03:31:52.463528 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s 2026-02-09 03:31:52.463535 | orchestrator | Remove all rook related logical devices --------------------------------- 0.42s 2026-02-09 03:31:52.463542 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s 2026-02-09 03:32:04.913271 | orchestrator | 2026-02-09 03:32:04 | INFO  | Task b1d8f493-f71b-4b4c-91d6-2dac00ee7616 (facts) was prepared for execution. 2026-02-09 03:32:04.913353 | orchestrator | 2026-02-09 03:32:04 | INFO  | It takes a moment until task b1d8f493-f71b-4b4c-91d6-2dac00ee7616 (facts) has been started and output is visible here. 
2026-02-09 03:32:18.192902 | orchestrator | 2026-02-09 03:32:18.193048 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-09 03:32:18.193077 | orchestrator | 2026-02-09 03:32:18.193098 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-09 03:32:18.193117 | orchestrator | Monday 09 February 2026 03:32:09 +0000 (0:00:00.272) 0:00:00.272 ******* 2026-02-09 03:32:18.193235 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:32:18.193251 | orchestrator | ok: [testbed-manager] 2026-02-09 03:32:18.193262 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:32:18.193272 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:32:18.193283 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:32:18.193293 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:32:18.193304 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:32:18.193315 | orchestrator | 2026-02-09 03:32:18.193326 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-09 03:32:18.193338 | orchestrator | Monday 09 February 2026 03:32:10 +0000 (0:00:01.146) 0:00:01.418 ******* 2026-02-09 03:32:18.193349 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:32:18.193361 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:32:18.193372 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:32:18.193382 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:32:18.193393 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:18.193407 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:18.193420 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:32:18.193433 | orchestrator | 2026-02-09 03:32:18.193446 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-09 03:32:18.193459 | orchestrator | 2026-02-09 03:32:18.193473 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-09 03:32:18.193486 | orchestrator | Monday 09 February 2026 03:32:11 +0000 (0:00:01.294) 0:00:02.713 ******* 2026-02-09 03:32:18.193499 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:32:18.193511 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:32:18.193524 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:32:18.193537 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:32:18.193549 | orchestrator | ok: [testbed-manager] 2026-02-09 03:32:18.193563 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:32:18.193575 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:32:18.193588 | orchestrator | 2026-02-09 03:32:18.193600 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-09 03:32:18.193613 | orchestrator | 2026-02-09 03:32:18.193626 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-09 03:32:18.193639 | orchestrator | Monday 09 February 2026 03:32:17 +0000 (0:00:05.111) 0:00:07.824 ******* 2026-02-09 03:32:18.193651 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:32:18.193662 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:32:18.193672 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:32:18.193683 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:32:18.193694 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:18.193704 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:18.193715 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:32:18.193725 | orchestrator | 2026-02-09 03:32:18.193736 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:32:18.193747 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:32:18.193845 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-09 03:32:18.193866 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:32:18.193877 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:32:18.193888 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:32:18.193899 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:32:18.193921 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:32:18.193932 | orchestrator | 2026-02-09 03:32:18.193943 | orchestrator | 2026-02-09 03:32:18.193954 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:32:18.193965 | orchestrator | Monday 09 February 2026 03:32:17 +0000 (0:00:00.620) 0:00:08.445 ******* 2026-02-09 03:32:18.193976 | orchestrator | =============================================================================== 2026-02-09 03:32:18.193986 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.11s 2026-02-09 03:32:18.193997 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s 2026-02-09 03:32:18.194008 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.15s 2026-02-09 03:32:18.194098 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2026-02-09 03:32:20.746282 | orchestrator | 2026-02-09 03:32:20 | INFO  | Task 8bf2d508-1663-4b71-a09b-e4a7945cc8c7 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-09 03:32:20.746390 | orchestrator | 2026-02-09 03:32:20 | INFO  | It takes a moment until task 8bf2d508-1663-4b71-a09b-e4a7945cc8c7 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-09 03:32:33.835318 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-09 03:32:33.835419 | orchestrator | 2.16.14 2026-02-09 03:32:33.835432 | orchestrator | 2026-02-09 03:32:33.835441 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-09 03:32:33.835479 | orchestrator | 2026-02-09 03:32:33.835488 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-09 03:32:33.835496 | orchestrator | Monday 09 February 2026 03:32:25 +0000 (0:00:00.399) 0:00:00.399 ******* 2026-02-09 03:32:33.835504 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-09 03:32:33.835512 | orchestrator | 2026-02-09 03:32:33.835531 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-09 03:32:33.835538 | orchestrator | Monday 09 February 2026 03:32:25 +0000 (0:00:00.290) 0:00:00.690 ******* 2026-02-09 03:32:33.835545 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:32:33.835552 | orchestrator | 2026-02-09 03:32:33.835559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835566 | orchestrator | Monday 09 February 2026 03:32:26 +0000 (0:00:00.259) 0:00:00.950 ******* 2026-02-09 03:32:33.835572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-09 03:32:33.835579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-09 03:32:33.835586 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-09 03:32:33.835593 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-09 03:32:33.835599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-09 03:32:33.835606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-09 03:32:33.835612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-09 03:32:33.835619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-09 03:32:33.835626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-09 03:32:33.835632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-09 03:32:33.835639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-09 03:32:33.835646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-09 03:32:33.835672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-09 03:32:33.835679 | orchestrator | 2026-02-09 03:32:33.835686 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835693 | orchestrator | Monday 09 February 2026 03:32:26 +0000 (0:00:00.538) 0:00:01.488 ******* 2026-02-09 03:32:33.835699 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.835707 | orchestrator | 2026-02-09 03:32:33.835713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835720 | orchestrator | Monday 09 February 2026 03:32:26 +0000 (0:00:00.224) 0:00:01.713 ******* 2026-02-09 03:32:33.835727 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.835733 | orchestrator | 2026-02-09 03:32:33.835740 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835746 | orchestrator | Monday 09 February 2026 03:32:27 +0000 (0:00:00.227) 0:00:01.940 ******* 2026-02-09 03:32:33.835753 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.835760 | orchestrator | 2026-02-09 03:32:33.835766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835773 | orchestrator | Monday 09 February 2026 03:32:27 +0000 (0:00:00.239) 0:00:02.180 ******* 2026-02-09 03:32:33.835780 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.835786 | orchestrator | 2026-02-09 03:32:33.835793 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835800 | orchestrator | Monday 09 February 2026 03:32:27 +0000 (0:00:00.206) 0:00:02.386 ******* 2026-02-09 03:32:33.835806 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.835813 | orchestrator | 2026-02-09 03:32:33.835819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835826 | orchestrator | Monday 09 February 2026 03:32:27 +0000 (0:00:00.219) 0:00:02.605 ******* 2026-02-09 03:32:33.835833 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.835839 | orchestrator | 2026-02-09 03:32:33.835846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835852 | orchestrator | Monday 09 February 2026 03:32:28 +0000 (0:00:00.276) 0:00:02.882 ******* 2026-02-09 03:32:33.835859 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.835866 | orchestrator | 2026-02-09 03:32:33.835873 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835881 | orchestrator | Monday 09 February 2026 03:32:28 +0000 (0:00:00.240) 0:00:03.123 ******* 
2026-02-09 03:32:33.835889 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.835897 | orchestrator | 2026-02-09 03:32:33.835904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835912 | orchestrator | Monday 09 February 2026 03:32:28 +0000 (0:00:00.207) 0:00:03.330 ******* 2026-02-09 03:32:33.835919 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8) 2026-02-09 03:32:33.835929 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8) 2026-02-09 03:32:33.835937 | orchestrator | 2026-02-09 03:32:33.835945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.835966 | orchestrator | Monday 09 February 2026 03:32:29 +0000 (0:00:00.456) 0:00:03.787 ******* 2026-02-09 03:32:33.835975 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d) 2026-02-09 03:32:33.835983 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d) 2026-02-09 03:32:33.835990 | orchestrator | 2026-02-09 03:32:33.835998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.836006 | orchestrator | Monday 09 February 2026 03:32:29 +0000 (0:00:00.725) 0:00:04.513 ******* 2026-02-09 03:32:33.836021 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4) 2026-02-09 03:32:33.836041 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4) 2026-02-09 03:32:33.836053 | orchestrator | 2026-02-09 03:32:33.836064 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.836074 | orchestrator | Monday 09 February 2026 03:32:30 
+0000 (0:00:00.751) 0:00:05.264 ******* 2026-02-09 03:32:33.836085 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa) 2026-02-09 03:32:33.836097 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa) 2026-02-09 03:32:33.836107 | orchestrator | 2026-02-09 03:32:33.836173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:33.836186 | orchestrator | Monday 09 February 2026 03:32:31 +0000 (0:00:00.940) 0:00:06.204 ******* 2026-02-09 03:32:33.836197 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-09 03:32:33.836207 | orchestrator | 2026-02-09 03:32:33.836218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:33.836229 | orchestrator | Monday 09 February 2026 03:32:31 +0000 (0:00:00.349) 0:00:06.554 ******* 2026-02-09 03:32:33.836241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-09 03:32:33.836249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-09 03:32:33.836256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-09 03:32:33.836262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-09 03:32:33.836269 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-09 03:32:33.836275 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-09 03:32:33.836282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-09 03:32:33.836289 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-02-09 03:32:33.836295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-09 03:32:33.836302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-09 03:32:33.836309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-09 03:32:33.836315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-09 03:32:33.836322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-09 03:32:33.836328 | orchestrator | 2026-02-09 03:32:33.836335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:33.836342 | orchestrator | Monday 09 February 2026 03:32:32 +0000 (0:00:00.428) 0:00:06.982 ******* 2026-02-09 03:32:33.836348 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.836355 | orchestrator | 2026-02-09 03:32:33.836362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:33.836368 | orchestrator | Monday 09 February 2026 03:32:32 +0000 (0:00:00.216) 0:00:07.199 ******* 2026-02-09 03:32:33.836375 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.836381 | orchestrator | 2026-02-09 03:32:33.836388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:33.836395 | orchestrator | Monday 09 February 2026 03:32:32 +0000 (0:00:00.249) 0:00:07.449 ******* 2026-02-09 03:32:33.836401 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.836408 | orchestrator | 2026-02-09 03:32:33.836415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:33.836421 | orchestrator | Monday 09 February 2026 03:32:32 
+0000 (0:00:00.243) 0:00:07.692 ******* 2026-02-09 03:32:33.836434 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.836441 | orchestrator | 2026-02-09 03:32:33.836448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:33.836455 | orchestrator | Monday 09 February 2026 03:32:33 +0000 (0:00:00.233) 0:00:07.926 ******* 2026-02-09 03:32:33.836461 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.836468 | orchestrator | 2026-02-09 03:32:33.836475 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:33.836481 | orchestrator | Monday 09 February 2026 03:32:33 +0000 (0:00:00.224) 0:00:08.151 ******* 2026-02-09 03:32:33.836488 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.836494 | orchestrator | 2026-02-09 03:32:33.836501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:33.836508 | orchestrator | Monday 09 February 2026 03:32:33 +0000 (0:00:00.203) 0:00:08.355 ******* 2026-02-09 03:32:33.836514 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:33.836521 | orchestrator | 2026-02-09 03:32:33.836534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:42.361522 | orchestrator | Monday 09 February 2026 03:32:33 +0000 (0:00:00.213) 0:00:08.569 ******* 2026-02-09 03:32:42.361636 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:42.361653 | orchestrator | 2026-02-09 03:32:42.361666 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:42.361678 | orchestrator | Monday 09 February 2026 03:32:34 +0000 (0:00:00.212) 0:00:08.781 ******* 2026-02-09 03:32:42.361693 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-09 03:32:42.361712 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-09 
03:32:42.361732 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-09 03:32:42.361770 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-09 03:32:42.361791 | orchestrator | 2026-02-09 03:32:42.361810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:42.361822 | orchestrator | Monday 09 February 2026 03:32:35 +0000 (0:00:01.135) 0:00:09.917 ******* 2026-02-09 03:32:42.361833 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:42.361843 | orchestrator | 2026-02-09 03:32:42.361855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:42.361865 | orchestrator | Monday 09 February 2026 03:32:35 +0000 (0:00:00.261) 0:00:10.178 ******* 2026-02-09 03:32:42.361880 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:42.361898 | orchestrator | 2026-02-09 03:32:42.361917 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:42.361936 | orchestrator | Monday 09 February 2026 03:32:35 +0000 (0:00:00.221) 0:00:10.399 ******* 2026-02-09 03:32:42.361956 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:42.361968 | orchestrator | 2026-02-09 03:32:42.361979 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:42.361989 | orchestrator | Monday 09 February 2026 03:32:35 +0000 (0:00:00.224) 0:00:10.624 ******* 2026-02-09 03:32:42.362000 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:32:42.362092 | orchestrator | 2026-02-09 03:32:42.362137 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-09 03:32:42.362152 | orchestrator | Monday 09 February 2026 03:32:36 +0000 (0:00:00.202) 0:00:10.826 ******* 2026-02-09 03:32:42.362165 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-09 03:32:42.362179 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-09 03:32:42.362200 | orchestrator | 
2026-02-09 03:32:42.362221 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-09 03:32:42.362241 | orchestrator | Monday 09 February 2026 03:32:36 +0000 (0:00:00.214) 0:00:11.040 *******
2026-02-09 03:32:42.362256 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.362275 | orchestrator | 
2026-02-09 03:32:42.362295 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-09 03:32:42.362314 | orchestrator | Monday 09 February 2026 03:32:36 +0000 (0:00:00.168) 0:00:11.208 *******
2026-02-09 03:32:42.362363 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.362382 | orchestrator | 
2026-02-09 03:32:42.362400 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-09 03:32:42.362417 | orchestrator | Monday 09 February 2026 03:32:36 +0000 (0:00:00.170) 0:00:11.379 *******
2026-02-09 03:32:42.362435 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.362452 | orchestrator | 
2026-02-09 03:32:42.362469 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-09 03:32:42.362485 | orchestrator | Monday 09 February 2026 03:32:36 +0000 (0:00:00.154) 0:00:11.534 *******
2026-02-09 03:32:42.362502 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:32:42.362520 | orchestrator | 
2026-02-09 03:32:42.362539 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-09 03:32:42.362558 | orchestrator | Monday 09 February 2026 03:32:36 +0000 (0:00:00.162) 0:00:11.697 *******
2026-02-09 03:32:42.362577 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '709cc28b-6adb-555a-83e9-344e81441f7b'}})
2026-02-09 03:32:42.362589 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '244f969e-c6c5-5568-af21-d52fe589178d'}})
2026-02-09 03:32:42.362600 | orchestrator | 
2026-02-09 03:32:42.362611 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-09 03:32:42.362621 | orchestrator | Monday 09 February 2026 03:32:37 +0000 (0:00:00.206) 0:00:11.904 *******
2026-02-09 03:32:42.362633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '709cc28b-6adb-555a-83e9-344e81441f7b'}})
2026-02-09 03:32:42.362645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '244f969e-c6c5-5568-af21-d52fe589178d'}})
2026-02-09 03:32:42.362656 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.362667 | orchestrator | 
2026-02-09 03:32:42.362677 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-09 03:32:42.362688 | orchestrator | Monday 09 February 2026 03:32:37 +0000 (0:00:00.392) 0:00:12.296 *******
2026-02-09 03:32:42.362699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '709cc28b-6adb-555a-83e9-344e81441f7b'}})
2026-02-09 03:32:42.362710 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '244f969e-c6c5-5568-af21-d52fe589178d'}})
2026-02-09 03:32:42.362720 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.362731 | orchestrator | 
2026-02-09 03:32:42.362742 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-09 03:32:42.362752 | orchestrator | Monday 09 February 2026 03:32:37 +0000 (0:00:00.160) 0:00:12.457 *******
2026-02-09 03:32:42.362763 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '709cc28b-6adb-555a-83e9-344e81441f7b'}})
2026-02-09 03:32:42.362796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '244f969e-c6c5-5568-af21-d52fe589178d'}})
2026-02-09 03:32:42.362808 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.362819 | orchestrator | 
2026-02-09 03:32:42.362830 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-09 03:32:42.362841 | orchestrator | Monday 09 February 2026 03:32:37 +0000 (0:00:00.167) 0:00:12.625 *******
2026-02-09 03:32:42.362852 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:32:42.362862 | orchestrator | 
2026-02-09 03:32:42.362873 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-09 03:32:42.362907 | orchestrator | Monday 09 February 2026 03:32:38 +0000 (0:00:00.169) 0:00:12.795 *******
2026-02-09 03:32:42.362919 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:32:42.362930 | orchestrator | 
2026-02-09 03:32:42.362941 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-09 03:32:42.362956 | orchestrator | Monday 09 February 2026 03:32:38 +0000 (0:00:00.154) 0:00:12.949 *******
2026-02-09 03:32:42.362987 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.362998 | orchestrator | 
2026-02-09 03:32:42.363009 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-09 03:32:42.363020 | orchestrator | Monday 09 February 2026 03:32:38 +0000 (0:00:00.144) 0:00:13.093 *******
2026-02-09 03:32:42.363031 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.363041 | orchestrator | 
2026-02-09 03:32:42.363052 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-09 03:32:42.363063 | orchestrator | Monday 09 February 2026 03:32:38 +0000 (0:00:00.162) 0:00:13.256 *******
2026-02-09 03:32:42.363073 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.363084 | orchestrator | 
2026-02-09 
03:32:42.363095 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-09 03:32:42.363128 | orchestrator | Monday 09 February 2026 03:32:38 +0000 (0:00:00.148) 0:00:13.405 *******
2026-02-09 03:32:42.363142 | orchestrator | ok: [testbed-node-3] => {
2026-02-09 03:32:42.363153 | orchestrator |     "ceph_osd_devices": {
2026-02-09 03:32:42.363163 | orchestrator |         "sdb": {
2026-02-09 03:32:42.363174 | orchestrator |             "osd_lvm_uuid": "709cc28b-6adb-555a-83e9-344e81441f7b"
2026-02-09 03:32:42.363185 | orchestrator |         },
2026-02-09 03:32:42.363196 | orchestrator |         "sdc": {
2026-02-09 03:32:42.363206 | orchestrator |             "osd_lvm_uuid": "244f969e-c6c5-5568-af21-d52fe589178d"
2026-02-09 03:32:42.363217 | orchestrator |         }
2026-02-09 03:32:42.363227 | orchestrator |     }
2026-02-09 03:32:42.363238 | orchestrator | }
2026-02-09 03:32:42.363249 | orchestrator | 
2026-02-09 03:32:42.363260 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-09 03:32:42.363271 | orchestrator | Monday 09 February 2026 03:32:38 +0000 (0:00:00.160) 0:00:13.565 *******
2026-02-09 03:32:42.363281 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.363292 | orchestrator | 
2026-02-09 03:32:42.363303 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-09 03:32:42.363313 | orchestrator | Monday 09 February 2026 03:32:38 +0000 (0:00:00.164) 0:00:13.730 *******
2026-02-09 03:32:42.363324 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.363335 | orchestrator | 
2026-02-09 03:32:42.363345 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-09 03:32:42.363356 | orchestrator | Monday 09 February 2026 03:32:39 +0000 (0:00:00.149) 0:00:13.879 *******
2026-02-09 03:32:42.363366 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:32:42.363377 | orchestrator | 
2026-02-09 03:32:42.363388 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-09 03:32:42.363398 | orchestrator | Monday 09 February 2026 03:32:39 +0000 (0:00:00.157) 0:00:14.037 *******
2026-02-09 03:32:42.363409 | orchestrator | changed: [testbed-node-3] => {
2026-02-09 03:32:42.363419 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-09 03:32:42.363430 | orchestrator |         "ceph_osd_devices": {
2026-02-09 03:32:42.363441 | orchestrator |             "sdb": {
2026-02-09 03:32:42.363451 | orchestrator |                 "osd_lvm_uuid": "709cc28b-6adb-555a-83e9-344e81441f7b"
2026-02-09 03:32:42.363462 | orchestrator |             },
2026-02-09 03:32:42.363473 | orchestrator |             "sdc": {
2026-02-09 03:32:42.363484 | orchestrator |                 "osd_lvm_uuid": "244f969e-c6c5-5568-af21-d52fe589178d"
2026-02-09 03:32:42.363495 | orchestrator |             }
2026-02-09 03:32:42.363505 | orchestrator |         },
2026-02-09 03:32:42.363516 | orchestrator |         "lvm_volumes": [
2026-02-09 03:32:42.363527 | orchestrator |             {
2026-02-09 03:32:42.363537 | orchestrator |                 "data": "osd-block-709cc28b-6adb-555a-83e9-344e81441f7b",
2026-02-09 03:32:42.363548 | orchestrator |                 "data_vg": "ceph-709cc28b-6adb-555a-83e9-344e81441f7b"
2026-02-09 03:32:42.363562 | orchestrator |             },
2026-02-09 03:32:42.363579 | orchestrator |             {
2026-02-09 03:32:42.363599 | orchestrator |                 "data": "osd-block-244f969e-c6c5-5568-af21-d52fe589178d",
2026-02-09 03:32:42.363628 | orchestrator |                 "data_vg": "ceph-244f969e-c6c5-5568-af21-d52fe589178d"
2026-02-09 03:32:42.363646 | orchestrator |             }
2026-02-09 03:32:42.363663 | orchestrator |         ]
2026-02-09 03:32:42.363680 | orchestrator |     }
2026-02-09 03:32:42.363697 | orchestrator | }
2026-02-09 03:32:42.363715 | orchestrator | 
2026-02-09 03:32:42.363734 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-09 03:32:42.363752 | orchestrator | Monday 09 February 2026 03:32:39 +0000 (0:00:00.547) 0:00:14.584 *******
2026-02-09 
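Editor's note: the "Print configuration data" output above shows a simple mapping: each device's osd_lvm_uuid becomes a ceph-volume LV/VG name pair ("osd-block-<uuid>" in VG "ceph-<uuid>"). A minimal illustrative sketch of that mapping follows, reconstructed from the log output only (the function name is hypothetical, not the actual OSISM playbook logic):

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Derive block-only lvm_volumes entries from per-device OSD UUIDs,
    matching the structure printed by the ceph-configure-lvm-volumes play."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# Values taken verbatim from the testbed-node-3 log output above.
devices = {
    "sdb": {"osd_lvm_uuid": "709cc28b-6adb-555a-83e9-344e81441f7b"},
    "sdc": {"osd_lvm_uuid": "244f969e-c6c5-5568-af21-d52fe589178d"},
}
print(build_lvm_volumes(devices))
```

With the devices above this reproduces the two lvm_volumes entries shown in the printed configuration data.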
03:32:42.363769 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-09 03:32:42.363789 | orchestrator | 2026-02-09 03:32:42.363807 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-09 03:32:42.363826 | orchestrator | 2026-02-09 03:32:42.363845 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-09 03:32:42.363862 | orchestrator | Monday 09 February 2026 03:32:41 +0000 (0:00:01.955) 0:00:16.539 ******* 2026-02-09 03:32:42.363880 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-09 03:32:42.363898 | orchestrator | 2026-02-09 03:32:42.363915 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-09 03:32:42.363926 | orchestrator | Monday 09 February 2026 03:32:42 +0000 (0:00:00.280) 0:00:16.820 ******* 2026-02-09 03:32:42.363937 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:32:42.363948 | orchestrator | 2026-02-09 03:32:42.363970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.433561 | orchestrator | Monday 09 February 2026 03:32:42 +0000 (0:00:00.280) 0:00:17.101 ******* 2026-02-09 03:32:51.433649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-09 03:32:51.433659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-09 03:32:51.433667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-09 03:32:51.433687 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-09 03:32:51.433694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-09 03:32:51.433701 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-09 03:32:51.433708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-09 03:32:51.433715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-09 03:32:51.433722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-09 03:32:51.433729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-09 03:32:51.433736 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-09 03:32:51.433743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-09 03:32:51.433750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-09 03:32:51.433757 | orchestrator | 2026-02-09 03:32:51.433765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.433771 | orchestrator | Monday 09 February 2026 03:32:42 +0000 (0:00:00.456) 0:00:17.557 ******* 2026-02-09 03:32:51.433778 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.433786 | orchestrator | 2026-02-09 03:32:51.433793 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.433800 | orchestrator | Monday 09 February 2026 03:32:43 +0000 (0:00:00.229) 0:00:17.787 ******* 2026-02-09 03:32:51.433806 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.433830 | orchestrator | 2026-02-09 03:32:51.433844 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.433870 | orchestrator | Monday 09 February 2026 03:32:43 +0000 (0:00:00.206) 0:00:17.994 ******* 2026-02-09 03:32:51.433907 | orchestrator | skipping: 
[testbed-node-4] 2026-02-09 03:32:51.433918 | orchestrator | 2026-02-09 03:32:51.433928 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.433938 | orchestrator | Monday 09 February 2026 03:32:43 +0000 (0:00:00.204) 0:00:18.198 ******* 2026-02-09 03:32:51.433947 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.433957 | orchestrator | 2026-02-09 03:32:51.433966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.433976 | orchestrator | Monday 09 February 2026 03:32:44 +0000 (0:00:00.712) 0:00:18.911 ******* 2026-02-09 03:32:51.433985 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.433996 | orchestrator | 2026-02-09 03:32:51.434006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.434066 | orchestrator | Monday 09 February 2026 03:32:44 +0000 (0:00:00.238) 0:00:19.149 ******* 2026-02-09 03:32:51.434079 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.434091 | orchestrator | 2026-02-09 03:32:51.434127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.434140 | orchestrator | Monday 09 February 2026 03:32:44 +0000 (0:00:00.239) 0:00:19.389 ******* 2026-02-09 03:32:51.434152 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.434164 | orchestrator | 2026-02-09 03:32:51.434176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.434189 | orchestrator | Monday 09 February 2026 03:32:44 +0000 (0:00:00.220) 0:00:19.609 ******* 2026-02-09 03:32:51.434201 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.434212 | orchestrator | 2026-02-09 03:32:51.434224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.434236 | 
orchestrator | Monday 09 February 2026 03:32:45 +0000 (0:00:00.240) 0:00:19.849 ******* 2026-02-09 03:32:51.434248 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd) 2026-02-09 03:32:51.434262 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd) 2026-02-09 03:32:51.434275 | orchestrator | 2026-02-09 03:32:51.434287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.434299 | orchestrator | Monday 09 February 2026 03:32:45 +0000 (0:00:00.498) 0:00:20.348 ******* 2026-02-09 03:32:51.434311 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509) 2026-02-09 03:32:51.434323 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509) 2026-02-09 03:32:51.434334 | orchestrator | 2026-02-09 03:32:51.434347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.434358 | orchestrator | Monday 09 February 2026 03:32:46 +0000 (0:00:00.516) 0:00:20.865 ******* 2026-02-09 03:32:51.434370 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c) 2026-02-09 03:32:51.434382 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c) 2026-02-09 03:32:51.434394 | orchestrator | 2026-02-09 03:32:51.434404 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.434437 | orchestrator | Monday 09 February 2026 03:32:46 +0000 (0:00:00.520) 0:00:21.385 ******* 2026-02-09 03:32:51.434449 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24) 2026-02-09 03:32:51.434460 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24) 2026-02-09 03:32:51.434470 | orchestrator | 2026-02-09 03:32:51.434482 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:51.434500 | orchestrator | Monday 09 February 2026 03:32:47 +0000 (0:00:00.453) 0:00:21.839 ******* 2026-02-09 03:32:51.434512 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-09 03:32:51.434537 | orchestrator | 2026-02-09 03:32:51.434549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.434560 | orchestrator | Monday 09 February 2026 03:32:47 +0000 (0:00:00.382) 0:00:22.222 ******* 2026-02-09 03:32:51.434570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-09 03:32:51.434582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-09 03:32:51.434593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-09 03:32:51.434604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-09 03:32:51.434616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-09 03:32:51.434625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-09 03:32:51.434635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-09 03:32:51.434645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-09 03:32:51.434655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-09 03:32:51.434665 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-09 03:32:51.434676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-09 03:32:51.434685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-09 03:32:51.434695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-09 03:32:51.434705 | orchestrator | 2026-02-09 03:32:51.434716 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.434728 | orchestrator | Monday 09 February 2026 03:32:47 +0000 (0:00:00.411) 0:00:22.633 ******* 2026-02-09 03:32:51.434738 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.434749 | orchestrator | 2026-02-09 03:32:51.434760 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.434771 | orchestrator | Monday 09 February 2026 03:32:48 +0000 (0:00:00.709) 0:00:23.343 ******* 2026-02-09 03:32:51.434784 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.434794 | orchestrator | 2026-02-09 03:32:51.434805 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.434817 | orchestrator | Monday 09 February 2026 03:32:48 +0000 (0:00:00.228) 0:00:23.571 ******* 2026-02-09 03:32:51.434829 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.434839 | orchestrator | 2026-02-09 03:32:51.434850 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.434861 | orchestrator | Monday 09 February 2026 03:32:49 +0000 (0:00:00.227) 0:00:23.798 ******* 2026-02-09 03:32:51.434871 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.434882 | orchestrator | 2026-02-09 03:32:51.434892 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-09 03:32:51.434903 | orchestrator | Monday 09 February 2026 03:32:49 +0000 (0:00:00.245) 0:00:24.044 ******* 2026-02-09 03:32:51.434913 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.434923 | orchestrator | 2026-02-09 03:32:51.434933 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.434943 | orchestrator | Monday 09 February 2026 03:32:49 +0000 (0:00:00.224) 0:00:24.268 ******* 2026-02-09 03:32:51.434953 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.434964 | orchestrator | 2026-02-09 03:32:51.434975 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.434985 | orchestrator | Monday 09 February 2026 03:32:49 +0000 (0:00:00.249) 0:00:24.518 ******* 2026-02-09 03:32:51.434996 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.435016 | orchestrator | 2026-02-09 03:32:51.435028 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.435040 | orchestrator | Monday 09 February 2026 03:32:50 +0000 (0:00:00.270) 0:00:24.788 ******* 2026-02-09 03:32:51.435051 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:51.435062 | orchestrator | 2026-02-09 03:32:51.435072 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.435083 | orchestrator | Monday 09 February 2026 03:32:50 +0000 (0:00:00.228) 0:00:25.017 ******* 2026-02-09 03:32:51.435094 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-09 03:32:51.435176 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-09 03:32:51.435188 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-09 03:32:51.435198 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-09 03:32:51.435209 | orchestrator | 2026-02-09 
03:32:51.435220 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:51.435231 | orchestrator | Monday 09 February 2026 03:32:51 +0000 (0:00:00.936) 0:00:25.953 ******* 2026-02-09 03:32:51.435242 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.924731 | orchestrator | 2026-02-09 03:32:58.924826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:58.924839 | orchestrator | Monday 09 February 2026 03:32:51 +0000 (0:00:00.220) 0:00:26.173 ******* 2026-02-09 03:32:58.924847 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.924855 | orchestrator | 2026-02-09 03:32:58.924863 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:58.924871 | orchestrator | Monday 09 February 2026 03:32:51 +0000 (0:00:00.249) 0:00:26.423 ******* 2026-02-09 03:32:58.924893 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.924901 | orchestrator | 2026-02-09 03:32:58.924908 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:32:58.924915 | orchestrator | Monday 09 February 2026 03:32:52 +0000 (0:00:00.877) 0:00:27.300 ******* 2026-02-09 03:32:58.924923 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.924930 | orchestrator | 2026-02-09 03:32:58.924937 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-09 03:32:58.924945 | orchestrator | Monday 09 February 2026 03:32:52 +0000 (0:00:00.240) 0:00:27.540 ******* 2026-02-09 03:32:58.924952 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-09 03:32:58.924959 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-09 03:32:58.924967 | orchestrator | 2026-02-09 03:32:58.924974 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-02-09 03:32:58.924982 | orchestrator | Monday 09 February 2026 03:32:53 +0000 (0:00:00.237) 0:00:27.778 ******* 2026-02-09 03:32:58.924989 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.924996 | orchestrator | 2026-02-09 03:32:58.925003 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-09 03:32:58.925010 | orchestrator | Monday 09 February 2026 03:32:53 +0000 (0:00:00.152) 0:00:27.930 ******* 2026-02-09 03:32:58.925017 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925025 | orchestrator | 2026-02-09 03:32:58.925032 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-09 03:32:58.925039 | orchestrator | Monday 09 February 2026 03:32:53 +0000 (0:00:00.167) 0:00:28.098 ******* 2026-02-09 03:32:58.925046 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925053 | orchestrator | 2026-02-09 03:32:58.925061 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-09 03:32:58.925068 | orchestrator | Monday 09 February 2026 03:32:53 +0000 (0:00:00.151) 0:00:28.249 ******* 2026-02-09 03:32:58.925075 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:32:58.925083 | orchestrator | 2026-02-09 03:32:58.925090 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-09 03:32:58.925122 | orchestrator | Monday 09 February 2026 03:32:53 +0000 (0:00:00.158) 0:00:28.408 ******* 2026-02-09 03:32:58.925147 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2c0211a0-e551-5710-9a38-56737a7f5fb3'}}) 2026-02-09 03:32:58.925155 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}}) 2026-02-09 03:32:58.925163 | orchestrator | 2026-02-09 03:32:58.925170 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-09 03:32:58.925177 | orchestrator | Monday 09 February 2026 03:32:53 +0000 (0:00:00.186) 0:00:28.594 ******* 2026-02-09 03:32:58.925185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2c0211a0-e551-5710-9a38-56737a7f5fb3'}})  2026-02-09 03:32:58.925194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}})  2026-02-09 03:32:58.925201 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925208 | orchestrator | 2026-02-09 03:32:58.925216 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-09 03:32:58.925223 | orchestrator | Monday 09 February 2026 03:32:54 +0000 (0:00:00.175) 0:00:28.770 ******* 2026-02-09 03:32:58.925230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2c0211a0-e551-5710-9a38-56737a7f5fb3'}})  2026-02-09 03:32:58.925237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}})  2026-02-09 03:32:58.925245 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925252 | orchestrator | 2026-02-09 03:32:58.925259 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-09 03:32:58.925266 | orchestrator | Monday 09 February 2026 03:32:54 +0000 (0:00:00.180) 0:00:28.951 ******* 2026-02-09 03:32:58.925273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2c0211a0-e551-5710-9a38-56737a7f5fb3'}})  2026-02-09 03:32:58.925282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}})  2026-02-09 03:32:58.925290 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925298 | 
orchestrator | 2026-02-09 03:32:58.925306 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-09 03:32:58.925314 | orchestrator | Monday 09 February 2026 03:32:54 +0000 (0:00:00.161) 0:00:29.113 ******* 2026-02-09 03:32:58.925323 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:32:58.925331 | orchestrator | 2026-02-09 03:32:58.925339 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-09 03:32:58.925348 | orchestrator | Monday 09 February 2026 03:32:54 +0000 (0:00:00.147) 0:00:29.260 ******* 2026-02-09 03:32:58.925356 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:32:58.925364 | orchestrator | 2026-02-09 03:32:58.925373 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-09 03:32:58.925381 | orchestrator | Monday 09 February 2026 03:32:54 +0000 (0:00:00.160) 0:00:29.420 ******* 2026-02-09 03:32:58.925404 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925412 | orchestrator | 2026-02-09 03:32:58.925421 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-09 03:32:58.925429 | orchestrator | Monday 09 February 2026 03:32:55 +0000 (0:00:00.396) 0:00:29.816 ******* 2026-02-09 03:32:58.925437 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925446 | orchestrator | 2026-02-09 03:32:58.925454 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-09 03:32:58.925462 | orchestrator | Monday 09 February 2026 03:32:55 +0000 (0:00:00.160) 0:00:29.976 ******* 2026-02-09 03:32:58.925475 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925483 | orchestrator | 2026-02-09 03:32:58.925492 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-09 03:32:58.925500 | orchestrator | Monday 09 February 2026 03:32:55 +0000 
(0:00:00.143) 0:00:30.120 ******* 2026-02-09 03:32:58.925514 | orchestrator | ok: [testbed-node-4] => { 2026-02-09 03:32:58.925523 | orchestrator |  "ceph_osd_devices": { 2026-02-09 03:32:58.925592 | orchestrator |  "sdb": { 2026-02-09 03:32:58.925603 | orchestrator |  "osd_lvm_uuid": "2c0211a0-e551-5710-9a38-56737a7f5fb3" 2026-02-09 03:32:58.925611 | orchestrator |  }, 2026-02-09 03:32:58.925620 | orchestrator |  "sdc": { 2026-02-09 03:32:58.925629 | orchestrator |  "osd_lvm_uuid": "84c19404-a9f4-50a5-b230-c81d6fb6b3c9" 2026-02-09 03:32:58.925637 | orchestrator |  } 2026-02-09 03:32:58.925646 | orchestrator |  } 2026-02-09 03:32:58.925653 | orchestrator | } 2026-02-09 03:32:58.925661 | orchestrator | 2026-02-09 03:32:58.925668 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-09 03:32:58.925676 | orchestrator | Monday 09 February 2026 03:32:55 +0000 (0:00:00.145) 0:00:30.265 ******* 2026-02-09 03:32:58.925683 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925690 | orchestrator | 2026-02-09 03:32:58.925697 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-09 03:32:58.925705 | orchestrator | Monday 09 February 2026 03:32:55 +0000 (0:00:00.164) 0:00:30.430 ******* 2026-02-09 03:32:58.925712 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925719 | orchestrator | 2026-02-09 03:32:58.925726 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-09 03:32:58.925733 | orchestrator | Monday 09 February 2026 03:32:55 +0000 (0:00:00.155) 0:00:30.586 ******* 2026-02-09 03:32:58.925740 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:32:58.925748 | orchestrator | 2026-02-09 03:32:58.925755 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-09 03:32:58.925762 | orchestrator | Monday 09 February 2026 03:32:55 +0000 
(0:00:00.157) 0:00:30.743 ******* 2026-02-09 03:32:58.925769 | orchestrator | changed: [testbed-node-4] => { 2026-02-09 03:32:58.925777 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-09 03:32:58.925784 | orchestrator |  "ceph_osd_devices": { 2026-02-09 03:32:58.925791 | orchestrator |  "sdb": { 2026-02-09 03:32:58.925813 | orchestrator |  "osd_lvm_uuid": "2c0211a0-e551-5710-9a38-56737a7f5fb3" 2026-02-09 03:32:58.925821 | orchestrator |  }, 2026-02-09 03:32:58.925828 | orchestrator |  "sdc": { 2026-02-09 03:32:58.925836 | orchestrator |  "osd_lvm_uuid": "84c19404-a9f4-50a5-b230-c81d6fb6b3c9" 2026-02-09 03:32:58.925843 | orchestrator |  } 2026-02-09 03:32:58.925850 | orchestrator |  }, 2026-02-09 03:32:58.925857 | orchestrator |  "lvm_volumes": [ 2026-02-09 03:32:58.925864 | orchestrator |  { 2026-02-09 03:32:58.925871 | orchestrator |  "data": "osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3", 2026-02-09 03:32:58.925879 | orchestrator |  "data_vg": "ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3" 2026-02-09 03:32:58.925886 | orchestrator |  }, 2026-02-09 03:32:58.925893 | orchestrator |  { 2026-02-09 03:32:58.925900 | orchestrator |  "data": "osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9", 2026-02-09 03:32:58.925908 | orchestrator |  "data_vg": "ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9" 2026-02-09 03:32:58.925925 | orchestrator |  } 2026-02-09 03:32:58.925932 | orchestrator |  ] 2026-02-09 03:32:58.925940 | orchestrator |  } 2026-02-09 03:32:58.925947 | orchestrator | } 2026-02-09 03:32:58.925954 | orchestrator | 2026-02-09 03:32:58.925962 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-09 03:32:58.925969 | orchestrator | Monday 09 February 2026 03:32:56 +0000 (0:00:00.243) 0:00:30.987 ******* 2026-02-09 03:32:58.925976 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-09 03:32:58.925983 | orchestrator | 2026-02-09 03:32:58.925990 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-09 03:32:58.925998 | orchestrator | 2026-02-09 03:32:58.926005 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-09 03:32:58.926091 | orchestrator | Monday 09 February 2026 03:32:57 +0000 (0:00:01.610) 0:00:32.597 ******* 2026-02-09 03:32:58.926135 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-09 03:32:58.926142 | orchestrator | 2026-02-09 03:32:58.926149 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-09 03:32:58.926156 | orchestrator | Monday 09 February 2026 03:32:58 +0000 (0:00:00.281) 0:00:32.879 ******* 2026-02-09 03:32:58.926164 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:32:58.926171 | orchestrator | 2026-02-09 03:32:58.926178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:32:58.926185 | orchestrator | Monday 09 February 2026 03:32:58 +0000 (0:00:00.315) 0:00:33.195 ******* 2026-02-09 03:32:58.926192 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-09 03:32:58.926199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-09 03:32:58.926217 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-09 03:32:58.926224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-09 03:32:58.926232 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-09 03:32:58.926247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-09 03:33:08.081785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-09 03:33:08.081929 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-09 03:33:08.081961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-09 03:33:08.082004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-09 03:33:08.082178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-09 03:33:08.082203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-09 03:33:08.082224 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-09 03:33:08.082245 | orchestrator | 2026-02-09 03:33:08.082269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.082293 | orchestrator | Monday 09 February 2026 03:32:58 +0000 (0:00:00.460) 0:00:33.655 ******* 2026-02-09 03:33:08.082313 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.082338 | orchestrator | 2026-02-09 03:33:08.082362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.082383 | orchestrator | Monday 09 February 2026 03:32:59 +0000 (0:00:00.246) 0:00:33.902 ******* 2026-02-09 03:33:08.082404 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.082425 | orchestrator | 2026-02-09 03:33:08.082448 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.082470 | orchestrator | Monday 09 February 2026 03:32:59 +0000 (0:00:00.216) 0:00:34.119 ******* 2026-02-09 03:33:08.082491 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.082514 | orchestrator | 2026-02-09 03:33:08.082538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.082558 | 
orchestrator | Monday 09 February 2026 03:32:59 +0000 (0:00:00.227) 0:00:34.346 ******* 2026-02-09 03:33:08.082579 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.082601 | orchestrator | 2026-02-09 03:33:08.082623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.082645 | orchestrator | Monday 09 February 2026 03:32:59 +0000 (0:00:00.204) 0:00:34.551 ******* 2026-02-09 03:33:08.082666 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.082687 | orchestrator | 2026-02-09 03:33:08.082706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.082725 | orchestrator | Monday 09 February 2026 03:33:00 +0000 (0:00:00.228) 0:00:34.780 ******* 2026-02-09 03:33:08.082775 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.082796 | orchestrator | 2026-02-09 03:33:08.082813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.082834 | orchestrator | Monday 09 February 2026 03:33:00 +0000 (0:00:00.214) 0:00:34.994 ******* 2026-02-09 03:33:08.082853 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.082873 | orchestrator | 2026-02-09 03:33:08.082891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.082911 | orchestrator | Monday 09 February 2026 03:33:01 +0000 (0:00:00.769) 0:00:35.763 ******* 2026-02-09 03:33:08.082930 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.082952 | orchestrator | 2026-02-09 03:33:08.082972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.082992 | orchestrator | Monday 09 February 2026 03:33:01 +0000 (0:00:00.206) 0:00:35.970 ******* 2026-02-09 03:33:08.083005 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d) 2026-02-09 03:33:08.083017 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d) 2026-02-09 03:33:08.083028 | orchestrator | 2026-02-09 03:33:08.083039 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.083049 | orchestrator | Monday 09 February 2026 03:33:01 +0000 (0:00:00.446) 0:00:36.416 ******* 2026-02-09 03:33:08.083060 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717) 2026-02-09 03:33:08.083071 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717) 2026-02-09 03:33:08.083082 | orchestrator | 2026-02-09 03:33:08.083126 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.083144 | orchestrator | Monday 09 February 2026 03:33:02 +0000 (0:00:00.490) 0:00:36.907 ******* 2026-02-09 03:33:08.083155 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46) 2026-02-09 03:33:08.083166 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46) 2026-02-09 03:33:08.083177 | orchestrator | 2026-02-09 03:33:08.083188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:33:08.083198 | orchestrator | Monday 09 February 2026 03:33:02 +0000 (0:00:00.477) 0:00:37.385 ******* 2026-02-09 03:33:08.083210 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0) 2026-02-09 03:33:08.083221 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0) 2026-02-09 03:33:08.083232 | orchestrator | 2026-02-09 03:33:08.083242 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-09 03:33:08.083253 | orchestrator | Monday 09 February 2026 03:33:03 +0000 (0:00:00.512) 0:00:37.897 ******* 2026-02-09 03:33:08.083264 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-09 03:33:08.083274 | orchestrator | 2026-02-09 03:33:08.083285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083321 | orchestrator | Monday 09 February 2026 03:33:03 +0000 (0:00:00.520) 0:00:38.418 ******* 2026-02-09 03:33:08.083333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-09 03:33:08.083343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-09 03:33:08.083354 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-09 03:33:08.083375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-09 03:33:08.083386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-09 03:33:08.083397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-09 03:33:08.083419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-09 03:33:08.083430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-09 03:33:08.083441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-09 03:33:08.083451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-09 03:33:08.083462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-09 03:33:08.083472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-09 03:33:08.083483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-09 03:33:08.083493 | orchestrator | 2026-02-09 03:33:08.083502 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083512 | orchestrator | Monday 09 February 2026 03:33:04 +0000 (0:00:00.414) 0:00:38.833 ******* 2026-02-09 03:33:08.083521 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083531 | orchestrator | 2026-02-09 03:33:08.083540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083550 | orchestrator | Monday 09 February 2026 03:33:04 +0000 (0:00:00.256) 0:00:39.089 ******* 2026-02-09 03:33:08.083559 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083569 | orchestrator | 2026-02-09 03:33:08.083578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083588 | orchestrator | Monday 09 February 2026 03:33:04 +0000 (0:00:00.230) 0:00:39.320 ******* 2026-02-09 03:33:08.083598 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083607 | orchestrator | 2026-02-09 03:33:08.083617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083626 | orchestrator | Monday 09 February 2026 03:33:05 +0000 (0:00:00.728) 0:00:40.049 ******* 2026-02-09 03:33:08.083636 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083645 | orchestrator | 2026-02-09 03:33:08.083655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083664 | orchestrator | Monday 09 February 2026 03:33:05 +0000 (0:00:00.229) 0:00:40.278 ******* 2026-02-09 03:33:08.083674 
| orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083683 | orchestrator | 2026-02-09 03:33:08.083693 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083702 | orchestrator | Monday 09 February 2026 03:33:05 +0000 (0:00:00.256) 0:00:40.534 ******* 2026-02-09 03:33:08.083712 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083721 | orchestrator | 2026-02-09 03:33:08.083731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083740 | orchestrator | Monday 09 February 2026 03:33:06 +0000 (0:00:00.218) 0:00:40.752 ******* 2026-02-09 03:33:08.083750 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083760 | orchestrator | 2026-02-09 03:33:08.083769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083779 | orchestrator | Monday 09 February 2026 03:33:06 +0000 (0:00:00.222) 0:00:40.975 ******* 2026-02-09 03:33:08.083788 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083798 | orchestrator | 2026-02-09 03:33:08.083807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083817 | orchestrator | Monday 09 February 2026 03:33:06 +0000 (0:00:00.242) 0:00:41.217 ******* 2026-02-09 03:33:08.083826 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-09 03:33:08.083836 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-09 03:33:08.083846 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-09 03:33:08.083856 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-09 03:33:08.083865 | orchestrator | 2026-02-09 03:33:08.083881 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083891 | orchestrator | Monday 09 February 2026 03:33:07 +0000 (0:00:00.695) 
0:00:41.912 ******* 2026-02-09 03:33:08.083900 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083910 | orchestrator | 2026-02-09 03:33:08.083919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083929 | orchestrator | Monday 09 February 2026 03:33:07 +0000 (0:00:00.228) 0:00:42.140 ******* 2026-02-09 03:33:08.083938 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083948 | orchestrator | 2026-02-09 03:33:08.083957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.083967 | orchestrator | Monday 09 February 2026 03:33:07 +0000 (0:00:00.221) 0:00:42.361 ******* 2026-02-09 03:33:08.083976 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.083986 | orchestrator | 2026-02-09 03:33:08.083995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:33:08.084005 | orchestrator | Monday 09 February 2026 03:33:07 +0000 (0:00:00.226) 0:00:42.588 ******* 2026-02-09 03:33:08.084014 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:33:08.084024 | orchestrator | 2026-02-09 03:33:08.084039 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-09 03:33:12.932465 | orchestrator | Monday 09 February 2026 03:33:08 +0000 (0:00:00.233) 0:00:42.821 ******* 2026-02-09 03:33:12.932541 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-09 03:33:12.932548 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-09 03:33:12.932552 | orchestrator | 2026-02-09 03:33:12.932557 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-09 03:33:12.932573 | orchestrator | Monday 09 February 2026 03:33:08 +0000 (0:00:00.418) 0:00:43.240 ******* 2026-02-09 03:33:12.932577 | orchestrator | skipping: 
[testbed-node-5]
2026-02-09 03:33:12.932582 | orchestrator |
2026-02-09 03:33:12.932586 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-09 03:33:12.932590 | orchestrator | Monday 09 February 2026 03:33:08 +0000 (0:00:00.143) 0:00:43.384 *******
2026-02-09 03:33:12.932594 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932597 | orchestrator |
2026-02-09 03:33:12.932601 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-09 03:33:12.932605 | orchestrator | Monday 09 February 2026 03:33:08 +0000 (0:00:00.174) 0:00:43.559 *******
2026-02-09 03:33:12.932609 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932613 | orchestrator |
2026-02-09 03:33:12.932616 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-09 03:33:12.932620 | orchestrator | Monday 09 February 2026 03:33:08 +0000 (0:00:00.164) 0:00:43.723 *******
2026-02-09 03:33:12.932624 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:33:12.932628 | orchestrator |
2026-02-09 03:33:12.932632 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-09 03:33:12.932636 | orchestrator | Monday 09 February 2026 03:33:09 +0000 (0:00:00.156) 0:00:43.880 *******
2026-02-09 03:33:12.932640 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '46be6a4f-1579-5910-a72e-9190b5238c92'}})
2026-02-09 03:33:12.932644 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fca1079b-480c-5ada-8652-888828a580b6'}})
2026-02-09 03:33:12.932648 | orchestrator |
2026-02-09 03:33:12.932652 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-09 03:33:12.932656 | orchestrator | Monday 09 February 2026 03:33:09 +0000 (0:00:00.185) 0:00:44.066 *******
2026-02-09 03:33:12.932660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '46be6a4f-1579-5910-a72e-9190b5238c92'}})
2026-02-09 03:33:12.932664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fca1079b-480c-5ada-8652-888828a580b6'}})
2026-02-09 03:33:12.932668 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932688 | orchestrator |
2026-02-09 03:33:12.932692 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-09 03:33:12.932696 | orchestrator | Monday 09 February 2026 03:33:09 +0000 (0:00:00.161) 0:00:44.227 *******
2026-02-09 03:33:12.932700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '46be6a4f-1579-5910-a72e-9190b5238c92'}})
2026-02-09 03:33:12.932703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fca1079b-480c-5ada-8652-888828a580b6'}})
2026-02-09 03:33:12.932707 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932711 | orchestrator |
2026-02-09 03:33:12.932715 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-09 03:33:12.932718 | orchestrator | Monday 09 February 2026 03:33:09 +0000 (0:00:00.161) 0:00:44.388 *******
2026-02-09 03:33:12.932722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '46be6a4f-1579-5910-a72e-9190b5238c92'}})
2026-02-09 03:33:12.932726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fca1079b-480c-5ada-8652-888828a580b6'}})
2026-02-09 03:33:12.932729 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932733 | orchestrator |
2026-02-09 03:33:12.932737 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-09 03:33:12.932740 | orchestrator | Monday 09 February 2026 03:33:09 +0000 (0:00:00.171) 0:00:44.560 *******
2026-02-09 03:33:12.932744 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:33:12.932748 | orchestrator |
2026-02-09 03:33:12.932751 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-09 03:33:12.932755 | orchestrator | Monday 09 February 2026 03:33:09 +0000 (0:00:00.158) 0:00:44.718 *******
2026-02-09 03:33:12.932759 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:33:12.932763 | orchestrator |
2026-02-09 03:33:12.932766 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-09 03:33:12.932770 | orchestrator | Monday 09 February 2026 03:33:10 +0000 (0:00:00.157) 0:00:44.876 *******
2026-02-09 03:33:12.932774 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932777 | orchestrator |
2026-02-09 03:33:12.932781 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-09 03:33:12.932785 | orchestrator | Monday 09 February 2026 03:33:10 +0000 (0:00:00.420) 0:00:45.296 *******
2026-02-09 03:33:12.932789 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932792 | orchestrator |
2026-02-09 03:33:12.932796 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-09 03:33:12.932800 | orchestrator | Monday 09 February 2026 03:33:10 +0000 (0:00:00.156) 0:00:45.453 *******
2026-02-09 03:33:12.932803 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932807 | orchestrator |
2026-02-09 03:33:12.932811 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-09 03:33:12.932814 | orchestrator | Monday 09 February 2026 03:33:10 +0000 (0:00:00.146) 0:00:45.599 *******
2026-02-09 03:33:12.932818 | orchestrator | ok: [testbed-node-5] => {
2026-02-09 03:33:12.932822 | orchestrator |     "ceph_osd_devices": {
2026-02-09 03:33:12.932826 | orchestrator |         "sdb": {
2026-02-09 03:33:12.932839 | orchestrator |             "osd_lvm_uuid": "46be6a4f-1579-5910-a72e-9190b5238c92"
2026-02-09 03:33:12.932844 | orchestrator |         },
2026-02-09 03:33:12.932847 | orchestrator |         "sdc": {
2026-02-09 03:33:12.932851 | orchestrator |             "osd_lvm_uuid": "fca1079b-480c-5ada-8652-888828a580b6"
2026-02-09 03:33:12.932855 | orchestrator |         }
2026-02-09 03:33:12.932859 | orchestrator |     }
2026-02-09 03:33:12.932863 | orchestrator | }
2026-02-09 03:33:12.932867 | orchestrator |
2026-02-09 03:33:12.932874 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-09 03:33:12.932878 | orchestrator | Monday 09 February 2026 03:33:11 +0000 (0:00:00.166) 0:00:45.766 *******
2026-02-09 03:33:12.932881 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932891 | orchestrator |
2026-02-09 03:33:12.932895 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-09 03:33:12.932898 | orchestrator | Monday 09 February 2026 03:33:11 +0000 (0:00:00.160) 0:00:45.927 *******
2026-02-09 03:33:12.932902 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932906 | orchestrator |
2026-02-09 03:33:12.932910 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-09 03:33:12.932913 | orchestrator | Monday 09 February 2026 03:33:11 +0000 (0:00:00.177) 0:00:46.104 *******
2026-02-09 03:33:12.932917 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:33:12.932921 | orchestrator |
2026-02-09 03:33:12.932924 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-09 03:33:12.932928 | orchestrator | Monday 09 February 2026 03:33:11 +0000 (0:00:00.144) 0:00:46.249 *******
2026-02-09 03:33:12.932932 | orchestrator | changed: [testbed-node-5] => {
2026-02-09 03:33:12.932936 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-09 03:33:12.932940 | orchestrator |         "ceph_osd_devices": {
2026-02-09 03:33:12.932944 | orchestrator |             "sdb": {
2026-02-09 03:33:12.932947 | orchestrator |                 "osd_lvm_uuid": "46be6a4f-1579-5910-a72e-9190b5238c92"
2026-02-09 03:33:12.932951 | orchestrator |             },
2026-02-09 03:33:12.932955 | orchestrator |             "sdc": {
2026-02-09 03:33:12.932959 | orchestrator |                 "osd_lvm_uuid": "fca1079b-480c-5ada-8652-888828a580b6"
2026-02-09 03:33:12.932962 | orchestrator |             }
2026-02-09 03:33:12.932966 | orchestrator |         },
2026-02-09 03:33:12.932970 | orchestrator |         "lvm_volumes": [
2026-02-09 03:33:12.932974 | orchestrator |             {
2026-02-09 03:33:12.932978 | orchestrator |                 "data": "osd-block-46be6a4f-1579-5910-a72e-9190b5238c92",
2026-02-09 03:33:12.932982 | orchestrator |                 "data_vg": "ceph-46be6a4f-1579-5910-a72e-9190b5238c92"
2026-02-09 03:33:12.932985 | orchestrator |             },
2026-02-09 03:33:12.932989 | orchestrator |             {
2026-02-09 03:33:12.932993 | orchestrator |                 "data": "osd-block-fca1079b-480c-5ada-8652-888828a580b6",
2026-02-09 03:33:12.932996 | orchestrator |                 "data_vg": "ceph-fca1079b-480c-5ada-8652-888828a580b6"
2026-02-09 03:33:12.933000 | orchestrator |             }
2026-02-09 03:33:12.933004 | orchestrator |         ]
2026-02-09 03:33:12.933008 | orchestrator |     }
2026-02-09 03:33:12.933011 | orchestrator | }
2026-02-09 03:33:12.933015 | orchestrator |
2026-02-09 03:33:12.933019 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-09 03:33:12.933023 | orchestrator | Monday 09 February 2026 03:33:11 +0000 (0:00:00.264) 0:00:46.514 *******
2026-02-09 03:33:12.933026 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-09 03:33:12.933031 | orchestrator |
2026-02-09 03:33:12.933035 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:33:12.933040 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-09 03:33:12.933045 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-09 03:33:12.933050 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-09 03:33:12.933054 | orchestrator |
2026-02-09 03:33:12.933059 | orchestrator |
2026-02-09 03:33:12.933063 | orchestrator |
2026-02-09 03:33:12.933067 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:33:12.933072 | orchestrator | Monday 09 February 2026 03:33:12 +0000 (0:00:01.139) 0:00:47.654 *******
2026-02-09 03:33:12.933076 | orchestrator | ===============================================================================
2026-02-09 03:33:12.933080 | orchestrator | Write configuration file ------------------------------------------------ 4.71s
2026-02-09 03:33:12.933146 | orchestrator | Add known links to the list of available block devices ------------------ 1.46s
2026-02-09 03:33:12.933150 | orchestrator | Add known partitions to the list of available block devices ------------- 1.25s
2026-02-09 03:33:12.933155 | orchestrator | Add known partitions to the list of available block devices ------------- 1.14s
2026-02-09 03:33:12.933159 | orchestrator | Print configuration data ------------------------------------------------ 1.06s
2026-02-09 03:33:12.933164 | orchestrator | Set DB devices config data ---------------------------------------------- 0.96s
2026-02-09 03:33:12.933168 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s
2026-02-09 03:33:12.933172 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s
2026-02-09 03:33:12.933177 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s
2026-02-09 03:33:12.933181 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.87s
2026-02-09 03:33:12.933186 | orchestrator | Get initial list of available block devices ----------------------------- 0.86s
2026-02-09 03:33:12.933190 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s
2026-02-09 03:33:12.933195 | orchestrator | Add known links to the list of available block devices ------------------ 0.77s
2026-02-09 03:33:12.933202 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2026-02-09 03:33:13.453771 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.73s
2026-02-09 03:33:13.453876 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2026-02-09 03:33:13.453891 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2026-02-09 03:33:13.453922 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2026-02-09 03:33:13.453934 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2026-02-09 03:33:13.453945 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2026-02-09 03:33:36.244477 | orchestrator | 2026-02-09 03:33:36 | INFO  | Task fd3adb23-0d22-423f-8773-9ea838614f7f (sync inventory) is running in background. Output coming soon.
2026-02-09 03:34:06.492653 | orchestrator | 2026-02-09 03:33:37 | INFO  | Starting group_vars file reorganization
2026-02-09 03:34:06.492735 | orchestrator | 2026-02-09 03:33:37 | INFO  | Moved 0 file(s) to their respective directories
2026-02-09 03:34:06.492743 | orchestrator | 2026-02-09 03:33:37 | INFO  | Group_vars file reorganization completed
2026-02-09 03:34:06.492748 | orchestrator | 2026-02-09 03:33:40 | INFO  | Starting variable preparation from inventory
2026-02-09 03:34:06.492753 | orchestrator | 2026-02-09 03:33:44 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-09 03:34:06.492759 | orchestrator | 2026-02-09 03:33:44 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-09 03:34:06.492763 | orchestrator | 2026-02-09 03:33:44 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-09 03:34:06.492768 | orchestrator | 2026-02-09 03:33:44 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-09 03:34:06.492773 | orchestrator | 2026-02-09 03:33:44 | INFO  | Variable preparation completed
2026-02-09 03:34:06.492778 | orchestrator | 2026-02-09 03:33:45 | INFO  | Starting inventory overwrite handling
2026-02-09 03:34:06.492783 | orchestrator | 2026-02-09 03:33:45 | INFO  | Handling group overwrites in 99-overwrite
2026-02-09 03:34:06.492788 | orchestrator | 2026-02-09 03:33:45 | INFO  | Removing group frr:children from 60-generic
2026-02-09 03:34:06.492792 | orchestrator | 2026-02-09 03:33:45 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-09 03:34:06.492797 | orchestrator | 2026-02-09 03:33:45 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-09 03:34:06.492821 | orchestrator | 2026-02-09 03:33:45 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-09 03:34:06.492826 | orchestrator | 2026-02-09 03:33:45 | INFO  | Handling group overwrites in 20-roles
2026-02-09 03:34:06.492830 | orchestrator | 2026-02-09 03:33:45 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-09 03:34:06.492835 | orchestrator | 2026-02-09 03:33:45 | INFO  | Removed 5 group(s) in total
2026-02-09 03:34:06.492839 | orchestrator | 2026-02-09 03:33:45 | INFO  | Inventory overwrite handling completed
2026-02-09 03:34:06.492844 | orchestrator | 2026-02-09 03:33:47 | INFO  | Starting merge of inventory files
2026-02-09 03:34:06.492849 | orchestrator | 2026-02-09 03:33:47 | INFO  | Inventory files merged successfully
2026-02-09 03:34:06.492853 | orchestrator | 2026-02-09 03:33:52 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-09 03:34:06.492858 | orchestrator | 2026-02-09 03:34:05 | INFO  | Successfully wrote ClusterShell configuration
2026-02-09 03:34:06.492863 | orchestrator | [master 853a361] 2026-02-09-03-34
2026-02-09 03:34:06.492869 | orchestrator |  1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-09 03:34:08.886952 | orchestrator | 2026-02-09 03:34:08 | INFO  | Task 20bdb8d3-4de7-48d5-8a54-e8cac65dbc7c (ceph-create-lvm-devices) was prepared for execution.
2026-02-09 03:34:08.887020 | orchestrator | 2026-02-09 03:34:08 | INFO  | It takes a moment until task 20bdb8d3-4de7-48d5-8a54-e8cac65dbc7c (ceph-create-lvm-devices) has been started and output is visible here.
2026-02-09 03:34:21.871395 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-09 03:34:21.871501 | orchestrator | 2.16.14
2026-02-09 03:34:21.871518 | orchestrator |
2026-02-09 03:34:21.871531 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-09 03:34:21.871544 | orchestrator |
2026-02-09 03:34:21.871555 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-09 03:34:21.871566 | orchestrator | Monday 09 February 2026 03:34:13 +0000 (0:00:00.323) 0:00:00.323 *******
2026-02-09 03:34:21.871577 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-09 03:34:21.871588 | orchestrator |
2026-02-09 03:34:21.871599 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-09 03:34:21.871610 | orchestrator | Monday 09 February 2026 03:34:13 +0000 (0:00:00.252) 0:00:00.576 *******
2026-02-09 03:34:21.871621 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:34:21.871632 | orchestrator |
2026-02-09 03:34:21.871643 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.871654 | orchestrator | Monday 09 February 2026 03:34:14 +0000 (0:00:00.266) 0:00:00.843 *******
2026-02-09 03:34:21.871664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-09 03:34:21.871675 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-09 03:34:21.871702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-09 03:34:21.871713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-09 03:34:21.871724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-09 03:34:21.871735 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-09 03:34:21.871746 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-09 03:34:21.871756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-09 03:34:21.871767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-09 03:34:21.871778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-09 03:34:21.871812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-09 03:34:21.871824 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-09 03:34:21.871835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-09 03:34:21.871845 | orchestrator |
2026-02-09 03:34:21.871856 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.871867 | orchestrator | Monday 09 February 2026 03:34:14 +0000 (0:00:00.580) 0:00:01.423 *******
2026-02-09 03:34:21.871877 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.871888 | orchestrator |
2026-02-09 03:34:21.871899 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.871909 | orchestrator | Monday 09 February 2026 03:34:14 +0000 (0:00:00.207) 0:00:01.631 *******
2026-02-09 03:34:21.871920 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.871930 | orchestrator |
2026-02-09 03:34:21.871941 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.871952 | orchestrator | Monday 09 February 2026 03:34:15 +0000 (0:00:00.219) 0:00:01.850 *******
2026-02-09 03:34:21.871962 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.871973 | orchestrator |
2026-02-09 03:34:21.871983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.871994 | orchestrator | Monday 09 February 2026 03:34:15 +0000 (0:00:00.230) 0:00:02.081 *******
2026-02-09 03:34:21.872004 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.872015 | orchestrator |
2026-02-09 03:34:21.872025 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.872117 | orchestrator | Monday 09 February 2026 03:34:15 +0000 (0:00:00.261) 0:00:02.343 *******
2026-02-09 03:34:21.872130 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.872141 | orchestrator |
2026-02-09 03:34:21.872152 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.872163 | orchestrator | Monday 09 February 2026 03:34:15 +0000 (0:00:00.228) 0:00:02.572 *******
2026-02-09 03:34:21.872173 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.872184 | orchestrator |
2026-02-09 03:34:21.872195 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.872205 | orchestrator | Monday 09 February 2026 03:34:16 +0000 (0:00:00.239) 0:00:02.811 *******
2026-02-09 03:34:21.872216 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.872226 | orchestrator |
2026-02-09 03:34:21.872237 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.872248 | orchestrator | Monday 09 February 2026 03:34:16 +0000 (0:00:00.250) 0:00:03.061 *******
2026-02-09 03:34:21.872258 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.872269 | orchestrator |
2026-02-09 03:34:21.872279 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.872290 | orchestrator | Monday 09 February 2026 03:34:16 +0000 (0:00:00.212) 0:00:03.274 *******
2026-02-09 03:34:21.872301 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8)
2026-02-09 03:34:21.872313 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8)
2026-02-09 03:34:21.872324 | orchestrator |
2026-02-09 03:34:21.872335 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.872365 | orchestrator | Monday 09 February 2026 03:34:16 +0000 (0:00:00.435) 0:00:03.710 *******
2026-02-09 03:34:21.872377 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d)
2026-02-09 03:34:21.872388 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d)
2026-02-09 03:34:21.872399 | orchestrator |
2026-02-09 03:34:21.872409 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.872429 | orchestrator | Monday 09 February 2026 03:34:17 +0000 (0:00:00.706) 0:00:04.416 *******
2026-02-09 03:34:21.872440 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4)
2026-02-09 03:34:21.872451 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4)
2026-02-09 03:34:21.872462 | orchestrator |
2026-02-09 03:34:21.872472 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.872483 | orchestrator | Monday 09 February 2026 03:34:18 +0000 (0:00:00.750) 0:00:05.167 *******
2026-02-09 03:34:21.872494 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa)
2026-02-09 03:34:21.872511 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa)
2026-02-09 03:34:21.872522 | orchestrator |
2026-02-09 03:34:21.872533 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:34:21.872545 | orchestrator | Monday 09 February 2026 03:34:19 +0000 (0:00:00.975) 0:00:06.143 *******
2026-02-09 03:34:21.872572 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-09 03:34:21.872583 | orchestrator |
2026-02-09 03:34:21.872594 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:21.872615 | orchestrator | Monday 09 February 2026 03:34:19 +0000 (0:00:00.377) 0:00:06.520 *******
2026-02-09 03:34:21.872626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-09 03:34:21.872637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-09 03:34:21.872648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-09 03:34:21.872658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-09 03:34:21.872669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-09 03:34:21.872679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-09 03:34:21.872690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-09 03:34:21.872700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-09 03:34:21.872711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-09 03:34:21.872721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-09 03:34:21.872732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-09 03:34:21.872743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-09 03:34:21.872753 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-09 03:34:21.872764 | orchestrator |
2026-02-09 03:34:21.872775 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:21.872785 | orchestrator | Monday 09 February 2026 03:34:20 +0000 (0:00:00.450) 0:00:06.971 *******
2026-02-09 03:34:21.872796 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.872806 | orchestrator |
2026-02-09 03:34:21.872817 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:21.872827 | orchestrator | Monday 09 February 2026 03:34:20 +0000 (0:00:00.242) 0:00:07.213 *******
2026-02-09 03:34:21.872838 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.872849 | orchestrator |
2026-02-09 03:34:21.872859 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:21.872870 | orchestrator | Monday 09 February 2026 03:34:20 +0000 (0:00:00.255) 0:00:07.469 *******
2026-02-09 03:34:21.872881 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.872898 | orchestrator |
2026-02-09 03:34:21.872909 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:21.872920 | orchestrator | Monday 09 February 2026 03:34:20 +0000 (0:00:00.226) 0:00:07.695 *******
2026-02-09 03:34:21.872942 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.872954 | orchestrator |
2026-02-09 03:34:21.872974 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:21.872986 | orchestrator | Monday 09 February 2026 03:34:21 +0000 (0:00:00.237) 0:00:07.933 *******
2026-02-09 03:34:21.872996 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.873007 | orchestrator |
2026-02-09 03:34:21.873018 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:21.873028 | orchestrator | Monday 09 February 2026 03:34:21 +0000 (0:00:00.243) 0:00:08.176 *******
2026-02-09 03:34:21.873069 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.873087 | orchestrator |
2026-02-09 03:34:21.873106 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:21.873125 | orchestrator | Monday 09 February 2026 03:34:21 +0000 (0:00:00.220) 0:00:08.397 *******
2026-02-09 03:34:21.873143 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:21.873155 | orchestrator |
2026-02-09 03:34:21.873172 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:30.492559 | orchestrator | Monday 09 February 2026 03:34:21 +0000 (0:00:00.245) 0:00:08.642 *******
2026-02-09 03:34:30.492654 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.492666 | orchestrator |
2026-02-09 03:34:30.492674 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:30.492682 | orchestrator | Monday 09 February 2026 03:34:22 +0000 (0:00:00.745) 0:00:09.388 *******
2026-02-09 03:34:30.492689 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-09 03:34:30.492696 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-09 03:34:30.492703 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-09 03:34:30.492710 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-09 03:34:30.492717 | orchestrator |
2026-02-09 03:34:30.492725 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:30.492731 | orchestrator | Monday 09 February 2026 03:34:23 +0000 (0:00:00.713) 0:00:10.102 *******
2026-02-09 03:34:30.492737 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.492743 | orchestrator |
2026-02-09 03:34:30.492750 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:30.492758 | orchestrator | Monday 09 February 2026 03:34:23 +0000 (0:00:00.209) 0:00:10.311 *******
2026-02-09 03:34:30.492764 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.492771 | orchestrator |
2026-02-09 03:34:30.492791 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:30.492798 | orchestrator | Monday 09 February 2026 03:34:23 +0000 (0:00:00.208) 0:00:10.520 *******
2026-02-09 03:34:30.492804 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.492809 | orchestrator |
2026-02-09 03:34:30.492815 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:34:30.492821 | orchestrator | Monday 09 February 2026 03:34:23 +0000 (0:00:00.216) 0:00:10.737 *******
2026-02-09 03:34:30.492828 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.492834 | orchestrator |
2026-02-09 03:34:30.492840 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-09 03:34:30.492847 | orchestrator | Monday 09 February 2026 03:34:24 +0000 (0:00:00.233) 0:00:10.970 *******
2026-02-09 03:34:30.492853 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.492859 | orchestrator |
2026-02-09 03:34:30.492865 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-09 03:34:30.492871 | orchestrator | Monday 09 February 2026 03:34:24 +0000 (0:00:00.149) 0:00:11.120 *******
2026-02-09 03:34:30.492878 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '709cc28b-6adb-555a-83e9-344e81441f7b'}})
2026-02-09 03:34:30.492906 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '244f969e-c6c5-5568-af21-d52fe589178d'}})
2026-02-09 03:34:30.492913 | orchestrator |
2026-02-09 03:34:30.492919 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-09 03:34:30.492926 | orchestrator | Monday 09 February 2026 03:34:24 +0000 (0:00:00.202) 0:00:11.322 *******
2026-02-09 03:34:30.492933 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 03:34:30.492940 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 03:34:30.492945 | orchestrator |
2026-02-09 03:34:30.492950 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-09 03:34:30.492956 | orchestrator | Monday 09 February 2026 03:34:26 +0000 (0:00:02.013) 0:00:13.336 *******
2026-02-09 03:34:30.492961 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 03:34:30.492969 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 03:34:30.492975 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.492980 | orchestrator |
2026-02-09 03:34:30.492986 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-09 03:34:30.492992 | orchestrator | Monday 09 February 2026 03:34:26 +0000 (0:00:00.168) 0:00:13.504 *******
2026-02-09 03:34:30.492998 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 03:34:30.493004 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 03:34:30.493009 | orchestrator |
2026-02-09 03:34:30.493014 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-09 03:34:30.493020 | orchestrator | Monday 09 February 2026 03:34:28 +0000 (0:00:01.493) 0:00:14.998 *******
2026-02-09 03:34:30.493025 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 03:34:30.493082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 03:34:30.493087 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.493093 | orchestrator |
2026-02-09 03:34:30.493099 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-09 03:34:30.493105 | orchestrator | Monday 09 February 2026 03:34:28 +0000 (0:00:00.176) 0:00:15.175 *******
2026-02-09 03:34:30.493127 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.493133 | orchestrator |
2026-02-09 03:34:30.493139 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-09 03:34:30.493145 | orchestrator | Monday 09 February 2026 03:34:28 +0000 (0:00:00.417) 0:00:15.593 *******
2026-02-09 03:34:30.493151 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 03:34:30.493157 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 03:34:30.493163 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.493169 | orchestrator |
2026-02-09 03:34:30.493175 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-09 03:34:30.493180 | orchestrator | Monday 09 February 2026 03:34:28 +0000 (0:00:00.176) 0:00:15.769 *******
2026-02-09 03:34:30.493195 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.493201 | orchestrator |
2026-02-09 03:34:30.493208 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-09 03:34:30.493214 | orchestrator | Monday 09 February 2026 03:34:29 +0000 (0:00:00.141) 0:00:15.910 *******
2026-02-09 03:34:30.493227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 03:34:30.493234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 03:34:30.493239 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.493246 | orchestrator |
2026-02-09 03:34:30.493253 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-09 03:34:30.493259 | orchestrator | Monday 09 February 2026 03:34:29 +0000 (0:00:00.179) 0:00:16.090 *******
2026-02-09 03:34:30.493265 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:34:30.493270 | orchestrator |
2026-02-09 03:34:30.493278 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-09 03:34:30.493287 | orchestrator | Monday
09 February 2026 03:34:29 +0000 (0:00:00.143) 0:00:16.234 ******* 2026-02-09 03:34:30.493295 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:30.493304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:30.493311 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:30.493317 | orchestrator | 2026-02-09 03:34:30.493324 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-09 03:34:30.493330 | orchestrator | Monday 09 February 2026 03:34:29 +0000 (0:00:00.172) 0:00:16.407 ******* 2026-02-09 03:34:30.493337 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:34:30.493345 | orchestrator | 2026-02-09 03:34:30.493352 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-09 03:34:30.493358 | orchestrator | Monday 09 February 2026 03:34:29 +0000 (0:00:00.168) 0:00:16.575 ******* 2026-02-09 03:34:30.493365 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:30.493371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:30.493377 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:30.493384 | orchestrator | 2026-02-09 03:34:30.493389 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-09 03:34:30.493395 | orchestrator | Monday 09 February 2026 03:34:29 +0000 (0:00:00.158) 0:00:16.734 ******* 2026-02-09 03:34:30.493401 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:30.493407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:30.493413 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:30.493419 | orchestrator | 2026-02-09 03:34:30.493426 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-09 03:34:30.493431 | orchestrator | Monday 09 February 2026 03:34:30 +0000 (0:00:00.155) 0:00:16.889 ******* 2026-02-09 03:34:30.493437 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:30.493444 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:30.493456 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:30.493461 | orchestrator | 2026-02-09 03:34:30.493468 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-09 03:34:30.493474 | orchestrator | Monday 09 February 2026 03:34:30 +0000 (0:00:00.211) 0:00:17.100 ******* 2026-02-09 03:34:30.493481 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:30.493486 | orchestrator | 2026-02-09 03:34:30.493492 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-09 03:34:30.493506 | orchestrator | Monday 09 February 2026 03:34:30 +0000 (0:00:00.167) 0:00:17.268 ******* 2026-02-09 03:34:37.722172 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722269 | orchestrator | 2026-02-09 03:34:37.722281 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-09 03:34:37.722290 | orchestrator | Monday 09 February 2026 03:34:30 +0000 (0:00:00.141) 0:00:17.410 ******* 2026-02-09 03:34:37.722297 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722305 | orchestrator | 2026-02-09 03:34:37.722312 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-09 03:34:37.722319 | orchestrator | Monday 09 February 2026 03:34:30 +0000 (0:00:00.361) 0:00:17.771 ******* 2026-02-09 03:34:37.722326 | orchestrator | ok: [testbed-node-3] => { 2026-02-09 03:34:37.722334 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-09 03:34:37.722341 | orchestrator | } 2026-02-09 03:34:37.722348 | orchestrator | 2026-02-09 03:34:37.722355 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-09 03:34:37.722361 | orchestrator | Monday 09 February 2026 03:34:31 +0000 (0:00:00.154) 0:00:17.926 ******* 2026-02-09 03:34:37.722368 | orchestrator | ok: [testbed-node-3] => { 2026-02-09 03:34:37.722375 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-09 03:34:37.722381 | orchestrator | } 2026-02-09 03:34:37.722388 | orchestrator | 2026-02-09 03:34:37.722395 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-09 03:34:37.722416 | orchestrator | Monday 09 February 2026 03:34:31 +0000 (0:00:00.165) 0:00:18.091 ******* 2026-02-09 03:34:37.722423 | orchestrator | ok: [testbed-node-3] => { 2026-02-09 03:34:37.722430 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-09 03:34:37.722436 | orchestrator | } 2026-02-09 03:34:37.722443 | orchestrator | 2026-02-09 03:34:37.722450 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-09 03:34:37.722456 | orchestrator | Monday 09 February 2026 03:34:31 +0000 (0:00:00.159) 0:00:18.251 ******* 2026-02-09 03:34:37.722463 | orchestrator | ok: 
[testbed-node-3] 2026-02-09 03:34:37.722470 | orchestrator | 2026-02-09 03:34:37.722476 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-09 03:34:37.722483 | orchestrator | Monday 09 February 2026 03:34:32 +0000 (0:00:00.670) 0:00:18.921 ******* 2026-02-09 03:34:37.722490 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:34:37.722496 | orchestrator | 2026-02-09 03:34:37.722503 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-09 03:34:37.722510 | orchestrator | Monday 09 February 2026 03:34:32 +0000 (0:00:00.531) 0:00:19.453 ******* 2026-02-09 03:34:37.722516 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:34:37.722523 | orchestrator | 2026-02-09 03:34:37.722529 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-09 03:34:37.722536 | orchestrator | Monday 09 February 2026 03:34:33 +0000 (0:00:00.509) 0:00:19.962 ******* 2026-02-09 03:34:37.722542 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:34:37.722550 | orchestrator | 2026-02-09 03:34:37.722559 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-09 03:34:37.722566 | orchestrator | Monday 09 February 2026 03:34:33 +0000 (0:00:00.160) 0:00:20.122 ******* 2026-02-09 03:34:37.722574 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722581 | orchestrator | 2026-02-09 03:34:37.722589 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-09 03:34:37.722615 | orchestrator | Monday 09 February 2026 03:34:33 +0000 (0:00:00.123) 0:00:20.245 ******* 2026-02-09 03:34:37.722623 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722631 | orchestrator | 2026-02-09 03:34:37.722637 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-09 03:34:37.722644 | orchestrator | 
Monday 09 February 2026 03:34:33 +0000 (0:00:00.128) 0:00:20.374 ******* 2026-02-09 03:34:37.722650 | orchestrator | ok: [testbed-node-3] => { 2026-02-09 03:34:37.722657 | orchestrator |  "vgs_report": { 2026-02-09 03:34:37.722664 | orchestrator |  "vg": [] 2026-02-09 03:34:37.722670 | orchestrator |  } 2026-02-09 03:34:37.722677 | orchestrator | } 2026-02-09 03:34:37.722684 | orchestrator | 2026-02-09 03:34:37.722691 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-09 03:34:37.722698 | orchestrator | Monday 09 February 2026 03:34:33 +0000 (0:00:00.162) 0:00:20.536 ******* 2026-02-09 03:34:37.722704 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722711 | orchestrator | 2026-02-09 03:34:37.722718 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-09 03:34:37.722724 | orchestrator | Monday 09 February 2026 03:34:33 +0000 (0:00:00.149) 0:00:20.686 ******* 2026-02-09 03:34:37.722731 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722737 | orchestrator | 2026-02-09 03:34:37.722744 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-09 03:34:37.722750 | orchestrator | Monday 09 February 2026 03:34:34 +0000 (0:00:00.384) 0:00:21.070 ******* 2026-02-09 03:34:37.722757 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722763 | orchestrator | 2026-02-09 03:34:37.722770 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-09 03:34:37.722776 | orchestrator | Monday 09 February 2026 03:34:34 +0000 (0:00:00.168) 0:00:21.239 ******* 2026-02-09 03:34:37.722783 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722789 | orchestrator | 2026-02-09 03:34:37.722796 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-09 03:34:37.722802 | orchestrator | Monday 
09 February 2026 03:34:34 +0000 (0:00:00.183) 0:00:21.423 ******* 2026-02-09 03:34:37.722809 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722815 | orchestrator | 2026-02-09 03:34:37.722822 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-09 03:34:37.722830 | orchestrator | Monday 09 February 2026 03:34:34 +0000 (0:00:00.157) 0:00:21.580 ******* 2026-02-09 03:34:37.722840 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722851 | orchestrator | 2026-02-09 03:34:37.722863 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-09 03:34:37.722873 | orchestrator | Monday 09 February 2026 03:34:34 +0000 (0:00:00.162) 0:00:21.743 ******* 2026-02-09 03:34:37.722883 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722893 | orchestrator | 2026-02-09 03:34:37.722904 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-09 03:34:37.722915 | orchestrator | Monday 09 February 2026 03:34:35 +0000 (0:00:00.172) 0:00:21.915 ******* 2026-02-09 03:34:37.722945 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722957 | orchestrator | 2026-02-09 03:34:37.722967 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-09 03:34:37.722974 | orchestrator | Monday 09 February 2026 03:34:35 +0000 (0:00:00.153) 0:00:22.069 ******* 2026-02-09 03:34:37.722981 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.722987 | orchestrator | 2026-02-09 03:34:37.722994 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-09 03:34:37.723001 | orchestrator | Monday 09 February 2026 03:34:35 +0000 (0:00:00.169) 0:00:22.239 ******* 2026-02-09 03:34:37.723007 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723014 | orchestrator | 2026-02-09 03:34:37.723037 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-09 03:34:37.723050 | orchestrator | Monday 09 February 2026 03:34:35 +0000 (0:00:00.152) 0:00:22.391 ******* 2026-02-09 03:34:37.723065 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723072 | orchestrator | 2026-02-09 03:34:37.723079 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-09 03:34:37.723085 | orchestrator | Monday 09 February 2026 03:34:35 +0000 (0:00:00.142) 0:00:22.533 ******* 2026-02-09 03:34:37.723092 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723098 | orchestrator | 2026-02-09 03:34:37.723110 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-09 03:34:37.723117 | orchestrator | Monday 09 February 2026 03:34:35 +0000 (0:00:00.156) 0:00:22.690 ******* 2026-02-09 03:34:37.723123 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723130 | orchestrator | 2026-02-09 03:34:37.723136 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-09 03:34:37.723143 | orchestrator | Monday 09 February 2026 03:34:36 +0000 (0:00:00.188) 0:00:22.879 ******* 2026-02-09 03:34:37.723149 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723156 | orchestrator | 2026-02-09 03:34:37.723162 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-09 03:34:37.723169 | orchestrator | Monday 09 February 2026 03:34:36 +0000 (0:00:00.507) 0:00:23.387 ******* 2026-02-09 03:34:37.723177 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:37.723186 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 
'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:37.723192 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723199 | orchestrator | 2026-02-09 03:34:37.723206 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-09 03:34:37.723212 | orchestrator | Monday 09 February 2026 03:34:36 +0000 (0:00:00.158) 0:00:23.545 ******* 2026-02-09 03:34:37.723219 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:37.723226 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:37.723233 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723239 | orchestrator | 2026-02-09 03:34:37.723246 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-09 03:34:37.723252 | orchestrator | Monday 09 February 2026 03:34:36 +0000 (0:00:00.205) 0:00:23.750 ******* 2026-02-09 03:34:37.723259 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:37.723266 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:37.723272 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723279 | orchestrator | 2026-02-09 03:34:37.723285 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-09 03:34:37.723292 | orchestrator | Monday 09 February 2026 03:34:37 +0000 (0:00:00.188) 0:00:23.939 ******* 2026-02-09 03:34:37.723299 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:37.723305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:37.723312 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723318 | orchestrator | 2026-02-09 03:34:37.723325 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-09 03:34:37.723332 | orchestrator | Monday 09 February 2026 03:34:37 +0000 (0:00:00.171) 0:00:24.110 ******* 2026-02-09 03:34:37.723343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:37.723350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:37.723356 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:37.723363 | orchestrator | 2026-02-09 03:34:37.723370 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-09 03:34:37.723376 | orchestrator | Monday 09 February 2026 03:34:37 +0000 (0:00:00.199) 0:00:24.310 ******* 2026-02-09 03:34:37.723390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:43.532373 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:43.532462 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:43.532470 | orchestrator | 2026-02-09 03:34:43.532476 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-09 03:34:43.532481 | orchestrator | Monday 09 February 2026 03:34:37 +0000 (0:00:00.187) 0:00:24.498 ******* 2026-02-09 03:34:43.532486 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:43.532516 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:43.532521 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:43.532525 | orchestrator | 2026-02-09 03:34:43.532540 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-09 03:34:43.532545 | orchestrator | Monday 09 February 2026 03:34:37 +0000 (0:00:00.172) 0:00:24.671 ******* 2026-02-09 03:34:43.532549 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:43.532553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:43.532558 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:43.532562 | orchestrator | 2026-02-09 03:34:43.532566 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-09 03:34:43.532570 | orchestrator | Monday 09 February 2026 03:34:38 +0000 (0:00:00.165) 0:00:24.836 ******* 2026-02-09 03:34:43.532574 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:34:43.532578 | orchestrator | 2026-02-09 03:34:43.532582 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-09 03:34:43.532586 | orchestrator | Monday 09 February 2026 03:34:38 +0000 
(0:00:00.525) 0:00:25.361 ******* 2026-02-09 03:34:43.532590 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:34:43.532593 | orchestrator | 2026-02-09 03:34:43.532597 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-09 03:34:43.532601 | orchestrator | Monday 09 February 2026 03:34:39 +0000 (0:00:00.532) 0:00:25.894 ******* 2026-02-09 03:34:43.532605 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:34:43.532609 | orchestrator | 2026-02-09 03:34:43.532612 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-09 03:34:43.532617 | orchestrator | Monday 09 February 2026 03:34:39 +0000 (0:00:00.180) 0:00:26.074 ******* 2026-02-09 03:34:43.532621 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'vg_name': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}) 2026-02-09 03:34:43.532626 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'vg_name': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}) 2026-02-09 03:34:43.532641 | orchestrator | 2026-02-09 03:34:43.532646 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-09 03:34:43.532649 | orchestrator | Monday 09 February 2026 03:34:39 +0000 (0:00:00.215) 0:00:26.289 ******* 2026-02-09 03:34:43.532653 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:43.532657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:43.532661 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:43.532664 | orchestrator | 2026-02-09 03:34:43.532668 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-09 03:34:43.532672 | orchestrator | Monday 09 February 2026 03:34:39 +0000 (0:00:00.489) 0:00:26.779 ******* 2026-02-09 03:34:43.532676 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:43.532680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:43.532683 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:43.532687 | orchestrator | 2026-02-09 03:34:43.532691 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-09 03:34:43.532695 | orchestrator | Monday 09 February 2026 03:34:40 +0000 (0:00:00.188) 0:00:26.968 ******* 2026-02-09 03:34:43.532698 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 03:34:43.532702 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 03:34:43.532706 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:34:43.532710 | orchestrator | 2026-02-09 03:34:43.532714 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-09 03:34:43.532717 | orchestrator | Monday 09 February 2026 03:34:40 +0000 (0:00:00.208) 0:00:27.176 ******* 2026-02-09 03:34:43.532733 | orchestrator | ok: [testbed-node-3] => { 2026-02-09 03:34:43.532737 | orchestrator |  "lvm_report": { 2026-02-09 03:34:43.532741 | orchestrator |  "lv": [ 2026-02-09 03:34:43.532745 | orchestrator |  { 2026-02-09 03:34:43.532749 | orchestrator |  "lv_name": 
"osd-block-244f969e-c6c5-5568-af21-d52fe589178d", 2026-02-09 03:34:43.532754 | orchestrator |  "vg_name": "ceph-244f969e-c6c5-5568-af21-d52fe589178d" 2026-02-09 03:34:43.532759 | orchestrator |  }, 2026-02-09 03:34:43.532765 | orchestrator |  { 2026-02-09 03:34:43.532772 | orchestrator |  "lv_name": "osd-block-709cc28b-6adb-555a-83e9-344e81441f7b", 2026-02-09 03:34:43.532778 | orchestrator |  "vg_name": "ceph-709cc28b-6adb-555a-83e9-344e81441f7b" 2026-02-09 03:34:43.532784 | orchestrator |  } 2026-02-09 03:34:43.532790 | orchestrator |  ], 2026-02-09 03:34:43.532796 | orchestrator |  "pv": [ 2026-02-09 03:34:43.532802 | orchestrator |  { 2026-02-09 03:34:43.532808 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-09 03:34:43.532815 | orchestrator |  "vg_name": "ceph-709cc28b-6adb-555a-83e9-344e81441f7b" 2026-02-09 03:34:43.532821 | orchestrator |  }, 2026-02-09 03:34:43.532827 | orchestrator |  { 2026-02-09 03:34:43.532838 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-09 03:34:43.532845 | orchestrator |  "vg_name": "ceph-244f969e-c6c5-5568-af21-d52fe589178d" 2026-02-09 03:34:43.532851 | orchestrator |  } 2026-02-09 03:34:43.532858 | orchestrator |  ] 2026-02-09 03:34:43.532865 | orchestrator |  } 2026-02-09 03:34:43.532872 | orchestrator | } 2026-02-09 03:34:43.532886 | orchestrator | 2026-02-09 03:34:43.532892 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-09 03:34:43.532899 | orchestrator | 2026-02-09 03:34:43.532906 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-09 03:34:43.532913 | orchestrator | Monday 09 February 2026 03:34:40 +0000 (0:00:00.332) 0:00:27.509 ******* 2026-02-09 03:34:43.532919 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-09 03:34:43.532926 | orchestrator | 2026-02-09 03:34:43.532933 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-09 
03:34:43.532940 | orchestrator | Monday 09 February 2026 03:34:41 +0000 (0:00:00.285) 0:00:27.794 ******* 2026-02-09 03:34:43.532947 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:34:43.532953 | orchestrator | 2026-02-09 03:34:43.532960 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:43.532967 | orchestrator | Monday 09 February 2026 03:34:41 +0000 (0:00:00.259) 0:00:28.054 ******* 2026-02-09 03:34:43.532973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-09 03:34:43.532980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-09 03:34:43.532987 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-09 03:34:43.532994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-09 03:34:43.533001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-09 03:34:43.533008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-09 03:34:43.533014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-09 03:34:43.533038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-09 03:34:43.533045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-09 03:34:43.533051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-09 03:34:43.533057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-09 03:34:43.533062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-09 03:34:43.533068 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-09 03:34:43.533073 | orchestrator | 2026-02-09 03:34:43.533080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:43.533086 | orchestrator | Monday 09 February 2026 03:34:41 +0000 (0:00:00.435) 0:00:28.489 ******* 2026-02-09 03:34:43.533091 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:43.533096 | orchestrator | 2026-02-09 03:34:43.533102 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:43.533108 | orchestrator | Monday 09 February 2026 03:34:41 +0000 (0:00:00.217) 0:00:28.707 ******* 2026-02-09 03:34:43.533114 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:43.533120 | orchestrator | 2026-02-09 03:34:43.533127 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:43.533133 | orchestrator | Monday 09 February 2026 03:34:42 +0000 (0:00:00.710) 0:00:29.417 ******* 2026-02-09 03:34:43.533138 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:43.533145 | orchestrator | 2026-02-09 03:34:43.533151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:43.533157 | orchestrator | Monday 09 February 2026 03:34:42 +0000 (0:00:00.219) 0:00:29.637 ******* 2026-02-09 03:34:43.533162 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:43.533168 | orchestrator | 2026-02-09 03:34:43.533174 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:43.533180 | orchestrator | Monday 09 February 2026 03:34:43 +0000 (0:00:00.231) 0:00:29.868 ******* 2026-02-09 03:34:43.533194 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:43.533200 | orchestrator | 2026-02-09 03:34:43.533205 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-09 03:34:43.533211 | orchestrator | Monday 09 February 2026 03:34:43 +0000 (0:00:00.205) 0:00:30.074 ******* 2026-02-09 03:34:43.533218 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:43.533224 | orchestrator | 2026-02-09 03:34:43.533239 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:54.387492 | orchestrator | Monday 09 February 2026 03:34:43 +0000 (0:00:00.234) 0:00:30.309 ******* 2026-02-09 03:34:54.387619 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.387640 | orchestrator | 2026-02-09 03:34:54.387657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:54.387672 | orchestrator | Monday 09 February 2026 03:34:43 +0000 (0:00:00.247) 0:00:30.556 ******* 2026-02-09 03:34:54.387687 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.387701 | orchestrator | 2026-02-09 03:34:54.387715 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:54.387730 | orchestrator | Monday 09 February 2026 03:34:43 +0000 (0:00:00.213) 0:00:30.769 ******* 2026-02-09 03:34:54.387744 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd) 2026-02-09 03:34:54.387761 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd) 2026-02-09 03:34:54.387776 | orchestrator | 2026-02-09 03:34:54.387809 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:54.387824 | orchestrator | Monday 09 February 2026 03:34:44 +0000 (0:00:00.451) 0:00:31.221 ******* 2026-02-09 03:34:54.387837 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509) 2026-02-09 03:34:54.387851 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509) 2026-02-09 03:34:54.387865 | orchestrator | 2026-02-09 03:34:54.387880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:54.387894 | orchestrator | Monday 09 February 2026 03:34:44 +0000 (0:00:00.456) 0:00:31.677 ******* 2026-02-09 03:34:54.387908 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c) 2026-02-09 03:34:54.387923 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c) 2026-02-09 03:34:54.387939 | orchestrator | 2026-02-09 03:34:54.387954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:54.387968 | orchestrator | Monday 09 February 2026 03:34:45 +0000 (0:00:00.405) 0:00:32.083 ******* 2026-02-09 03:34:54.387983 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24) 2026-02-09 03:34:54.387999 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24) 2026-02-09 03:34:54.388070 | orchestrator | 2026-02-09 03:34:54.388083 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-09 03:34:54.388093 | orchestrator | Monday 09 February 2026 03:34:45 +0000 (0:00:00.609) 0:00:32.693 ******* 2026-02-09 03:34:54.388103 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-09 03:34:54.388112 | orchestrator | 2026-02-09 03:34:54.388121 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388129 | orchestrator | Monday 09 February 2026 03:34:46 +0000 (0:00:00.530) 0:00:33.224 ******* 2026-02-09 03:34:54.388138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-09 03:34:54.388147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-09 03:34:54.388156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-09 03:34:54.388190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-09 03:34:54.388204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-09 03:34:54.388218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-09 03:34:54.388232 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-09 03:34:54.388246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-09 03:34:54.388260 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-09 03:34:54.388272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-09 03:34:54.388285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-09 03:34:54.388299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-09 03:34:54.388314 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-09 03:34:54.388328 | orchestrator | 2026-02-09 03:34:54.388343 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388357 | orchestrator | Monday 09 February 2026 03:34:47 +0000 (0:00:00.748) 0:00:33.972 ******* 2026-02-09 03:34:54.388366 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388374 | orchestrator | 2026-02-09 
03:34:54.388383 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388391 | orchestrator | Monday 09 February 2026 03:34:47 +0000 (0:00:00.201) 0:00:34.174 ******* 2026-02-09 03:34:54.388400 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388408 | orchestrator | 2026-02-09 03:34:54.388417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388425 | orchestrator | Monday 09 February 2026 03:34:47 +0000 (0:00:00.205) 0:00:34.380 ******* 2026-02-09 03:34:54.388434 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388443 | orchestrator | 2026-02-09 03:34:54.388470 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388480 | orchestrator | Monday 09 February 2026 03:34:47 +0000 (0:00:00.234) 0:00:34.615 ******* 2026-02-09 03:34:54.388488 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388497 | orchestrator | 2026-02-09 03:34:54.388505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388513 | orchestrator | Monday 09 February 2026 03:34:48 +0000 (0:00:00.208) 0:00:34.824 ******* 2026-02-09 03:34:54.388522 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388530 | orchestrator | 2026-02-09 03:34:54.388538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388548 | orchestrator | Monday 09 February 2026 03:34:48 +0000 (0:00:00.207) 0:00:35.031 ******* 2026-02-09 03:34:54.388556 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388565 | orchestrator | 2026-02-09 03:34:54.388573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388581 | orchestrator | Monday 09 February 2026 03:34:48 +0000 (0:00:00.208) 
0:00:35.239 ******* 2026-02-09 03:34:54.388599 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388608 | orchestrator | 2026-02-09 03:34:54.388616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388625 | orchestrator | Monday 09 February 2026 03:34:48 +0000 (0:00:00.192) 0:00:35.432 ******* 2026-02-09 03:34:54.388633 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388642 | orchestrator | 2026-02-09 03:34:54.388650 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388658 | orchestrator | Monday 09 February 2026 03:34:48 +0000 (0:00:00.206) 0:00:35.638 ******* 2026-02-09 03:34:54.388666 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-09 03:34:54.388683 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-09 03:34:54.388692 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-09 03:34:54.388701 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-09 03:34:54.388709 | orchestrator | 2026-02-09 03:34:54.388718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388726 | orchestrator | Monday 09 February 2026 03:34:49 +0000 (0:00:00.794) 0:00:36.433 ******* 2026-02-09 03:34:54.388734 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388743 | orchestrator | 2026-02-09 03:34:54.388751 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388760 | orchestrator | Monday 09 February 2026 03:34:50 +0000 (0:00:00.495) 0:00:36.929 ******* 2026-02-09 03:34:54.388768 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388776 | orchestrator | 2026-02-09 03:34:54.388785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388793 | orchestrator | Monday 09 
February 2026 03:34:50 +0000 (0:00:00.228) 0:00:37.158 ******* 2026-02-09 03:34:54.388802 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388810 | orchestrator | 2026-02-09 03:34:54.388818 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-09 03:34:54.388827 | orchestrator | Monday 09 February 2026 03:34:50 +0000 (0:00:00.198) 0:00:37.356 ******* 2026-02-09 03:34:54.388835 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388844 | orchestrator | 2026-02-09 03:34:54.388854 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-09 03:34:54.388868 | orchestrator | Monday 09 February 2026 03:34:50 +0000 (0:00:00.210) 0:00:37.567 ******* 2026-02-09 03:34:54.388882 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.388897 | orchestrator | 2026-02-09 03:34:54.388911 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-09 03:34:54.388925 | orchestrator | Monday 09 February 2026 03:34:50 +0000 (0:00:00.151) 0:00:37.719 ******* 2026-02-09 03:34:54.388939 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2c0211a0-e551-5710-9a38-56737a7f5fb3'}}) 2026-02-09 03:34:54.388954 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}}) 2026-02-09 03:34:54.388968 | orchestrator | 2026-02-09 03:34:54.388982 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-09 03:34:54.388995 | orchestrator | Monday 09 February 2026 03:34:51 +0000 (0:00:00.198) 0:00:37.917 ******* 2026-02-09 03:34:54.389035 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}) 2026-02-09 03:34:54.389051 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}) 2026-02-09 03:34:54.389065 | orchestrator | 2026-02-09 03:34:54.389079 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-09 03:34:54.389093 | orchestrator | Monday 09 February 2026 03:34:52 +0000 (0:00:01.814) 0:00:39.731 ******* 2026-02-09 03:34:54.389107 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 03:34:54.389123 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:34:54.389138 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:34:54.389151 | orchestrator | 2026-02-09 03:34:54.389165 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-09 03:34:54.389179 | orchestrator | Monday 09 February 2026 03:34:53 +0000 (0:00:00.152) 0:00:39.884 ******* 2026-02-09 03:34:54.389192 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}) 2026-02-09 03:34:54.389224 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}) 2026-02-09 03:35:00.241176 | orchestrator | 2026-02-09 03:35:00.241252 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-09 03:35:00.241260 | orchestrator | Monday 09 February 2026 03:34:54 +0000 (0:00:01.276) 0:00:41.160 ******* 2026-02-09 03:35:00.241266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 
'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 03:35:00.241272 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:35:00.241277 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241282 | orchestrator | 2026-02-09 03:35:00.241297 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-09 03:35:00.241301 | orchestrator | Monday 09 February 2026 03:34:54 +0000 (0:00:00.150) 0:00:41.311 ******* 2026-02-09 03:35:00.241305 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241309 | orchestrator | 2026-02-09 03:35:00.241313 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-09 03:35:00.241317 | orchestrator | Monday 09 February 2026 03:34:54 +0000 (0:00:00.147) 0:00:41.458 ******* 2026-02-09 03:35:00.241322 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 03:35:00.241326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:35:00.241330 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241334 | orchestrator | 2026-02-09 03:35:00.241338 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-09 03:35:00.241342 | orchestrator | Monday 09 February 2026 03:34:54 +0000 (0:00:00.148) 0:00:41.607 ******* 2026-02-09 03:35:00.241346 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241350 | orchestrator | 2026-02-09 03:35:00.241354 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-09 03:35:00.241358 | orchestrator | Monday 
09 February 2026 03:34:54 +0000 (0:00:00.132) 0:00:41.740 ******* 2026-02-09 03:35:00.241362 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 03:35:00.241366 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:35:00.241370 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241375 | orchestrator | 2026-02-09 03:35:00.241379 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-09 03:35:00.241383 | orchestrator | Monday 09 February 2026 03:34:55 +0000 (0:00:00.327) 0:00:42.067 ******* 2026-02-09 03:35:00.241387 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241391 | orchestrator | 2026-02-09 03:35:00.241395 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-09 03:35:00.241399 | orchestrator | Monday 09 February 2026 03:34:55 +0000 (0:00:00.131) 0:00:42.199 ******* 2026-02-09 03:35:00.241403 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 03:35:00.241407 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:35:00.241411 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241415 | orchestrator | 2026-02-09 03:35:00.241419 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-09 03:35:00.241433 | orchestrator | Monday 09 February 2026 03:34:55 +0000 (0:00:00.189) 0:00:42.389 ******* 2026-02-09 03:35:00.241437 | orchestrator | ok: [testbed-node-4] 
2026-02-09 03:35:00.241442 | orchestrator | 2026-02-09 03:35:00.241446 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-09 03:35:00.241450 | orchestrator | Monday 09 February 2026 03:34:55 +0000 (0:00:00.148) 0:00:42.537 ******* 2026-02-09 03:35:00.241454 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 03:35:00.241458 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:35:00.241462 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241466 | orchestrator | 2026-02-09 03:35:00.241470 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-09 03:35:00.241474 | orchestrator | Monday 09 February 2026 03:34:55 +0000 (0:00:00.161) 0:00:42.698 ******* 2026-02-09 03:35:00.241478 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 03:35:00.241482 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:35:00.241486 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241490 | orchestrator | 2026-02-09 03:35:00.241494 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-09 03:35:00.241508 | orchestrator | Monday 09 February 2026 03:34:56 +0000 (0:00:00.174) 0:00:42.873 ******* 2026-02-09 03:35:00.241513 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 
03:35:00.241517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:35:00.241521 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241525 | orchestrator | 2026-02-09 03:35:00.241529 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-09 03:35:00.241533 | orchestrator | Monday 09 February 2026 03:34:56 +0000 (0:00:00.166) 0:00:43.039 ******* 2026-02-09 03:35:00.241540 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241544 | orchestrator | 2026-02-09 03:35:00.241548 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-09 03:35:00.241552 | orchestrator | Monday 09 February 2026 03:34:56 +0000 (0:00:00.152) 0:00:43.192 ******* 2026-02-09 03:35:00.241556 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241559 | orchestrator | 2026-02-09 03:35:00.241563 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-09 03:35:00.241567 | orchestrator | Monday 09 February 2026 03:34:56 +0000 (0:00:00.153) 0:00:43.345 ******* 2026-02-09 03:35:00.241571 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241575 | orchestrator | 2026-02-09 03:35:00.241579 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-09 03:35:00.241583 | orchestrator | Monday 09 February 2026 03:34:56 +0000 (0:00:00.166) 0:00:43.511 ******* 2026-02-09 03:35:00.241587 | orchestrator | ok: [testbed-node-4] => { 2026-02-09 03:35:00.241591 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-09 03:35:00.241596 | orchestrator | } 2026-02-09 03:35:00.241600 | orchestrator | 2026-02-09 03:35:00.241604 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-09 
03:35:00.241609 | orchestrator | Monday 09 February 2026 03:34:56 +0000 (0:00:00.133) 0:00:43.645 ******* 2026-02-09 03:35:00.241613 | orchestrator | ok: [testbed-node-4] => { 2026-02-09 03:35:00.241616 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-09 03:35:00.241624 | orchestrator | } 2026-02-09 03:35:00.241628 | orchestrator | 2026-02-09 03:35:00.241632 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-09 03:35:00.241636 | orchestrator | Monday 09 February 2026 03:34:57 +0000 (0:00:00.164) 0:00:43.810 ******* 2026-02-09 03:35:00.241640 | orchestrator | ok: [testbed-node-4] => { 2026-02-09 03:35:00.241644 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-09 03:35:00.241648 | orchestrator | } 2026-02-09 03:35:00.241652 | orchestrator | 2026-02-09 03:35:00.241656 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-09 03:35:00.241660 | orchestrator | Monday 09 February 2026 03:34:57 +0000 (0:00:00.400) 0:00:44.211 ******* 2026-02-09 03:35:00.241664 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:35:00.241668 | orchestrator | 2026-02-09 03:35:00.241672 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-09 03:35:00.241676 | orchestrator | Monday 09 February 2026 03:34:57 +0000 (0:00:00.563) 0:00:44.775 ******* 2026-02-09 03:35:00.241680 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:35:00.241683 | orchestrator | 2026-02-09 03:35:00.241687 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-09 03:35:00.241692 | orchestrator | Monday 09 February 2026 03:34:58 +0000 (0:00:00.530) 0:00:45.305 ******* 2026-02-09 03:35:00.241695 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:35:00.241699 | orchestrator | 2026-02-09 03:35:00.241703 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-09 03:35:00.241707 | orchestrator | Monday 09 February 2026 03:34:59 +0000 (0:00:00.534) 0:00:45.840 ******* 2026-02-09 03:35:00.241711 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:35:00.241715 | orchestrator | 2026-02-09 03:35:00.241719 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-09 03:35:00.241723 | orchestrator | Monday 09 February 2026 03:34:59 +0000 (0:00:00.173) 0:00:46.013 ******* 2026-02-09 03:35:00.241727 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241731 | orchestrator | 2026-02-09 03:35:00.241735 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-09 03:35:00.241739 | orchestrator | Monday 09 February 2026 03:34:59 +0000 (0:00:00.149) 0:00:46.162 ******* 2026-02-09 03:35:00.241743 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241746 | orchestrator | 2026-02-09 03:35:00.241750 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-09 03:35:00.241754 | orchestrator | Monday 09 February 2026 03:34:59 +0000 (0:00:00.106) 0:00:46.268 ******* 2026-02-09 03:35:00.241758 | orchestrator | ok: [testbed-node-4] => { 2026-02-09 03:35:00.241762 | orchestrator |  "vgs_report": { 2026-02-09 03:35:00.241766 | orchestrator |  "vg": [] 2026-02-09 03:35:00.241771 | orchestrator |  } 2026-02-09 03:35:00.241775 | orchestrator | } 2026-02-09 03:35:00.241779 | orchestrator | 2026-02-09 03:35:00.241783 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-09 03:35:00.241787 | orchestrator | Monday 09 February 2026 03:34:59 +0000 (0:00:00.155) 0:00:46.424 ******* 2026-02-09 03:35:00.241791 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241795 | orchestrator | 2026-02-09 03:35:00.241799 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-09 03:35:00.241803 | orchestrator | Monday 09 February 2026 03:34:59 +0000 (0:00:00.135) 0:00:46.560 ******* 2026-02-09 03:35:00.241807 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241810 | orchestrator | 2026-02-09 03:35:00.241814 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-09 03:35:00.241819 | orchestrator | Monday 09 February 2026 03:34:59 +0000 (0:00:00.147) 0:00:46.707 ******* 2026-02-09 03:35:00.241826 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241832 | orchestrator | 2026-02-09 03:35:00.241838 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-09 03:35:00.241844 | orchestrator | Monday 09 February 2026 03:35:00 +0000 (0:00:00.144) 0:00:46.852 ******* 2026-02-09 03:35:00.241855 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:00.241861 | orchestrator | 2026-02-09 03:35:00.241871 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-09 03:35:05.352123 | orchestrator | Monday 09 February 2026 03:35:00 +0000 (0:00:00.162) 0:00:47.014 ******* 2026-02-09 03:35:05.352250 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352263 | orchestrator | 2026-02-09 03:35:05.352272 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-09 03:35:05.352279 | orchestrator | Monday 09 February 2026 03:35:00 +0000 (0:00:00.383) 0:00:47.397 ******* 2026-02-09 03:35:05.352286 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352292 | orchestrator | 2026-02-09 03:35:05.352298 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-09 03:35:05.352342 | orchestrator | Monday 09 February 2026 03:35:00 +0000 (0:00:00.144) 0:00:47.542 ******* 2026-02-09 03:35:05.352350 | orchestrator | skipping: [testbed-node-4] 
2026-02-09 03:35:05.352357 | orchestrator | 2026-02-09 03:35:05.352379 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-09 03:35:05.352386 | orchestrator | Monday 09 February 2026 03:35:00 +0000 (0:00:00.164) 0:00:47.707 ******* 2026-02-09 03:35:05.352393 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352399 | orchestrator | 2026-02-09 03:35:05.352406 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-09 03:35:05.352413 | orchestrator | Monday 09 February 2026 03:35:01 +0000 (0:00:00.158) 0:00:47.865 ******* 2026-02-09 03:35:05.352421 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352427 | orchestrator | 2026-02-09 03:35:05.352434 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-09 03:35:05.352440 | orchestrator | Monday 09 February 2026 03:35:01 +0000 (0:00:00.151) 0:00:48.016 ******* 2026-02-09 03:35:05.352447 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352453 | orchestrator | 2026-02-09 03:35:05.352459 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-09 03:35:05.352466 | orchestrator | Monday 09 February 2026 03:35:01 +0000 (0:00:00.148) 0:00:48.164 ******* 2026-02-09 03:35:05.352473 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352480 | orchestrator | 2026-02-09 03:35:05.352486 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-09 03:35:05.352493 | orchestrator | Monday 09 February 2026 03:35:01 +0000 (0:00:00.135) 0:00:48.300 ******* 2026-02-09 03:35:05.352499 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352506 | orchestrator | 2026-02-09 03:35:05.352512 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-09 03:35:05.352519 | orchestrator | 
Monday 09 February 2026 03:35:01 +0000 (0:00:00.144) 0:00:48.444 ******* 2026-02-09 03:35:05.352526 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352533 | orchestrator | 2026-02-09 03:35:05.352540 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-09 03:35:05.352546 | orchestrator | Monday 09 February 2026 03:35:01 +0000 (0:00:00.136) 0:00:48.581 ******* 2026-02-09 03:35:05.352553 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352560 | orchestrator | 2026-02-09 03:35:05.352567 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-09 03:35:05.352574 | orchestrator | Monday 09 February 2026 03:35:01 +0000 (0:00:00.154) 0:00:48.735 ******* 2026-02-09 03:35:05.352582 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 03:35:05.352591 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:35:05.352600 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:05.352607 | orchestrator | 2026-02-09 03:35:05.352613 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-09 03:35:05.352641 | orchestrator | Monday 09 February 2026 03:35:02 +0000 (0:00:00.168) 0:00:48.904 ******* 2026-02-09 03:35:05.352648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 03:35:05.352655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 03:35:05.352662 | orchestrator | skipping: 
[testbed-node-4]
2026-02-09 03:35:05.352668 | orchestrator |
2026-02-09 03:35:05.352675 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-09 03:35:05.352682 | orchestrator | Monday 09 February 2026 03:35:02 +0000 (0:00:00.158) 0:00:49.062 *******
2026-02-09 03:35:05.352689 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:05.352697 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:05.352703 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:35:05.352710 | orchestrator |
2026-02-09 03:35:05.352716 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-09 03:35:05.352723 | orchestrator | Monday 09 February 2026 03:35:02 +0000 (0:00:00.442) 0:00:49.505 *******
2026-02-09 03:35:05.352731 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:05.352738 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:05.352745 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:35:05.352753 | orchestrator |
2026-02-09 03:35:05.352779 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-09 03:35:05.352787 | orchestrator | Monday 09 February 2026 03:35:02 +0000 (0:00:00.172) 0:00:49.677 *******
2026-02-09 03:35:05.352794 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:05.352800 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:05.352806 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:35:05.352814 | orchestrator |
2026-02-09 03:35:05.352827 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-09 03:35:05.352835 | orchestrator | Monday 09 February 2026 03:35:03 +0000 (0:00:00.161) 0:00:49.839 *******
2026-02-09 03:35:05.352842 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:05.352849 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:05.352856 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:35:05.352863 | orchestrator |
2026-02-09 03:35:05.352869 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-09 03:35:05.352875 | orchestrator | Monday 09 February 2026 03:35:03 +0000 (0:00:00.153) 0:00:49.993 *******
2026-02-09 03:35:05.352882 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:05.352888 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:05.352894 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:35:05.352910 | orchestrator |
2026-02-09 03:35:05.352916 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-09 03:35:05.352922 | orchestrator | Monday 09 February 2026 03:35:03 +0000 (0:00:00.166) 0:00:50.160 *******
2026-02-09 03:35:05.352928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:05.352935 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:05.352941 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:35:05.352947 | orchestrator |
2026-02-09 03:35:05.352953 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-09 03:35:05.352960 | orchestrator | Monday 09 February 2026 03:35:03 +0000 (0:00:00.176) 0:00:50.337 *******
2026-02-09 03:35:05.352966 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:35:05.352973 | orchestrator |
2026-02-09 03:35:05.352979 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-09 03:35:05.352988 | orchestrator | Monday 09 February 2026 03:35:04 +0000 (0:00:00.526) 0:00:50.863 *******
2026-02-09 03:35:05.352997 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:35:05.353031 | orchestrator |
2026-02-09 03:35:05.353038 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-09 03:35:05.353044 | orchestrator | Monday 09 February 2026 03:35:04 +0000 (0:00:00.556) 0:00:51.420 *******
2026-02-09 03:35:05.353050 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:35:05.353055 | orchestrator |
2026-02-09 03:35:05.353061 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-09 03:35:05.353068 | orchestrator | Monday 09 February 2026 03:35:04 +0000 (0:00:00.162) 0:00:51.582 *******
2026-02-09 03:35:05.353075 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'vg_name': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:05.353084 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'vg_name': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:05.353090 | orchestrator |
2026-02-09 03:35:05.353096 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-09 03:35:05.353103 | orchestrator | Monday 09 February 2026 03:35:04 +0000 (0:00:00.181) 0:00:51.764 *******
2026-02-09 03:35:05.353110 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:05.353116 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:05.353123 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:35:05.353129 | orchestrator |
2026-02-09 03:35:05.353136 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-09 03:35:05.353142 | orchestrator | Monday 09 February 2026 03:35:05 +0000 (0:00:00.164) 0:00:51.929 *******
2026-02-09 03:35:05.353148 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:05.353165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:13.261597 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:35:13.261703 | orchestrator |
2026-02-09 03:35:13.261720 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-09 03:35:13.261733 | orchestrator | Monday 09 February 2026 03:35:05 +0000 (0:00:00.198) 0:00:52.127 *******
2026-02-09 03:35:13.261745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 03:35:13.261815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 03:35:13.261829 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:35:13.261861 | orchestrator |
2026-02-09 03:35:13.261879 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-09 03:35:13.261897 | orchestrator | Monday 09 February 2026 03:35:05 +0000 (0:00:00.411) 0:00:52.538 *******
2026-02-09 03:35:13.261916 | orchestrator | ok: [testbed-node-4] => {
2026-02-09 03:35:13.261936 | orchestrator |     "lvm_report": {
2026-02-09 03:35:13.261954 | orchestrator |         "lv": [
2026-02-09 03:35:13.261965 | orchestrator |             {
2026-02-09 03:35:13.261976 | orchestrator |                 "lv_name": "osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3",
2026-02-09 03:35:13.261988 | orchestrator |                 "vg_name": "ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3"
2026-02-09 03:35:13.262090 | orchestrator |             },
2026-02-09 03:35:13.262119 | orchestrator |             {
2026-02-09 03:35:13.262132 | orchestrator |                 "lv_name": "osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9",
2026-02-09 03:35:13.262144 | orchestrator |                 "vg_name": "ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9"
2026-02-09 03:35:13.262168 | orchestrator |             }
2026-02-09 03:35:13.262180 | orchestrator |         ],
2026-02-09 03:35:13.262192 | orchestrator |         "pv": [
2026-02-09 03:35:13.262205 | orchestrator |             {
2026-02-09 03:35:13.262217 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-09 03:35:13.262230 | orchestrator |                 "vg_name": "ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3"
2026-02-09 03:35:13.262243 | orchestrator |             },
2026-02-09 03:35:13.262256 | orchestrator |             {
2026-02-09 03:35:13.262269 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-09 03:35:13.262282 | orchestrator |                 "vg_name": "ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9"
2026-02-09 03:35:13.262294 | orchestrator |             }
2026-02-09 03:35:13.262307 | orchestrator |         ]
2026-02-09 03:35:13.262319 | orchestrator |     }
2026-02-09 03:35:13.262331 | orchestrator | }
2026-02-09 03:35:13.262343 | orchestrator |
2026-02-09 03:35:13.262391 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-09 03:35:13.262404 | orchestrator |
2026-02-09 03:35:13.262416 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-09 03:35:13.262429 | orchestrator | Monday 09 February 2026 03:35:06 +0000 (0:00:00.317) 0:00:52.856 *******
2026-02-09 03:35:13.262442 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-09 03:35:13.262455 | orchestrator |
2026-02-09 03:35:13.262467 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-09 03:35:13.262477 | orchestrator | Monday 09 February 2026 03:35:06 +0000 (0:00:00.270) 0:00:53.132 *******
2026-02-09 03:35:13.262488 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:35:13.262499 | orchestrator |
2026-02-09 03:35:13.262510 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.262520 | orchestrator | Monday 09 February 2026 03:35:06 +0000 (0:00:00.276) 0:00:53.403 *******
2026-02-09 03:35:13.262534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-09 03:35:13.262552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-09 03:35:13.262570 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-09 03:35:13.262588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-09 03:35:13.262605 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-09 03:35:13.262622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-09 03:35:13.262639 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-09 03:35:13.262678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-09 03:35:13.262699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-09 03:35:13.262711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-09 03:35:13.262722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-09 03:35:13.262732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-09 03:35:13.262743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-09 03:35:13.262753 | orchestrator |
2026-02-09 03:35:13.262764 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.262775 | orchestrator | Monday 09 February 2026 03:35:07 +0000 (0:00:00.434) 0:00:53.838 *******
2026-02-09 03:35:13.262786 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:13.262797 | orchestrator |
2026-02-09 03:35:13.262807 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.262818 | orchestrator | Monday 09 February 2026 03:35:07 +0000 (0:00:00.242) 0:00:54.080 *******
2026-02-09 03:35:13.262829 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:13.262839 | orchestrator |
2026-02-09 03:35:13.262850 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.262882 | orchestrator | Monday 09 February 2026 03:35:07 +0000 (0:00:00.223) 0:00:54.304 *******
2026-02-09 03:35:13.262894 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:13.262905 | orchestrator |
2026-02-09 03:35:13.262915 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.262926 | orchestrator | Monday 09 February 2026 03:35:07 +0000 (0:00:00.276) 0:00:54.581 *******
2026-02-09 03:35:13.262937 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:13.262948 | orchestrator |
2026-02-09 03:35:13.262958 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.262969 | orchestrator | Monday 09 February 2026 03:35:08 +0000 (0:00:00.974) 0:00:55.556 *******
2026-02-09 03:35:13.262980 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:13.262991 | orchestrator |
2026-02-09 03:35:13.263037 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.263055 | orchestrator | Monday 09 February 2026 03:35:09 +0000 (0:00:00.274) 0:00:55.831 *******
2026-02-09 03:35:13.263066 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:13.263077 | orchestrator |
2026-02-09 03:35:13.263088 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.263099 | orchestrator | Monday 09 February 2026 03:35:09 +0000 (0:00:00.352) 0:00:56.183 *******
2026-02-09 03:35:13.263109 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:13.263120 | orchestrator |
2026-02-09 03:35:13.263131 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.263142 | orchestrator | Monday 09 February 2026 03:35:09 +0000 (0:00:00.261) 0:00:56.444 *******
2026-02-09 03:35:13.263152 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:13.263163 | orchestrator |
2026-02-09 03:35:13.263173 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.263184 | orchestrator | Monday 09 February 2026 03:35:09 +0000 (0:00:00.275) 0:00:56.720 *******
2026-02-09 03:35:13.263195 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d)
2026-02-09 03:35:13.263207 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d)
2026-02-09 03:35:13.263218 | orchestrator |
2026-02-09 03:35:13.263228 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.263239 | orchestrator | Monday 09 February 2026 03:35:10 +0000 (0:00:00.544) 0:00:57.264 *******
2026-02-09 03:35:13.263336 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717)
2026-02-09 03:35:13.263367 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717)
2026-02-09 03:35:13.263384 | orchestrator |
2026-02-09 03:35:13.263402 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.263420 | orchestrator | Monday 09 February 2026 03:35:10 +0000 (0:00:00.486) 0:00:57.751 *******
2026-02-09 03:35:13.263437 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46)
2026-02-09 03:35:13.263455 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46)
2026-02-09 03:35:13.263474 | orchestrator |
2026-02-09 03:35:13.263494 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.263506 | orchestrator | Monday 09 February 2026 03:35:11 +0000 (0:00:00.586) 0:00:58.338 *******
2026-02-09 03:35:13.263517 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0)
2026-02-09 03:35:13.263528 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0)
2026-02-09 03:35:13.263539 | orchestrator |
2026-02-09 03:35:13.263550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-09 03:35:13.263561 | orchestrator | Monday 09 February 2026 03:35:12 +0000 (0:00:00.531) 0:00:58.870 *******
2026-02-09 03:35:13.263572 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-09 03:35:13.263583 | orchestrator |
2026-02-09 03:35:13.263594 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:13.263605 | orchestrator | Monday 09 February 2026 03:35:12 +0000 (0:00:00.427) 0:00:59.297 *******
2026-02-09 03:35:13.263616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-09 03:35:13.263627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-09 03:35:13.263637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-09 03:35:13.263648 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-09 03:35:13.263659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-09 03:35:13.263669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-09 03:35:13.263680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-09 03:35:13.263691 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-09 03:35:13.263702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-09 03:35:13.263712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-09 03:35:13.263723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-09 03:35:13.263745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-09 03:35:22.830585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-09 03:35:22.830666 | orchestrator |
2026-02-09 03:35:22.830674 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830680 | orchestrator | Monday 09 February 2026 03:35:13 +0000 (0:00:00.732) 0:01:00.029 *******
2026-02-09 03:35:22.830686 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830691 | orchestrator |
2026-02-09 03:35:22.830696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830713 | orchestrator | Monday 09 February 2026 03:35:13 +0000 (0:00:00.235) 0:01:00.265 *******
2026-02-09 03:35:22.830718 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830739 | orchestrator |
2026-02-09 03:35:22.830744 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830748 | orchestrator | Monday 09 February 2026 03:35:13 +0000 (0:00:00.228) 0:01:00.493 *******
2026-02-09 03:35:22.830753 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830757 | orchestrator |
2026-02-09 03:35:22.830762 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830766 | orchestrator | Monday 09 February 2026 03:35:13 +0000 (0:00:00.227) 0:01:00.721 *******
2026-02-09 03:35:22.830771 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830776 | orchestrator |
2026-02-09 03:35:22.830780 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830785 | orchestrator | Monday 09 February 2026 03:35:14 +0000 (0:00:00.269) 0:01:00.990 *******
2026-02-09 03:35:22.830789 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830794 | orchestrator |
2026-02-09 03:35:22.830798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830803 | orchestrator | Monday 09 February 2026 03:35:14 +0000 (0:00:00.252) 0:01:01.243 *******
2026-02-09 03:35:22.830807 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830812 | orchestrator |
2026-02-09 03:35:22.830816 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830821 | orchestrator | Monday 09 February 2026 03:35:14 +0000 (0:00:00.234) 0:01:01.477 *******
2026-02-09 03:35:22.830825 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830830 | orchestrator |
2026-02-09 03:35:22.830834 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830839 | orchestrator | Monday 09 February 2026 03:35:14 +0000 (0:00:00.225) 0:01:01.703 *******
2026-02-09 03:35:22.830844 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830848 | orchestrator |
2026-02-09 03:35:22.830853 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830857 | orchestrator | Monday 09 February 2026 03:35:15 +0000 (0:00:00.223) 0:01:01.926 *******
2026-02-09 03:35:22.830862 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-09 03:35:22.830867 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-09 03:35:22.830872 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-09 03:35:22.830877 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-09 03:35:22.830881 | orchestrator |
2026-02-09 03:35:22.830886 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830890 | orchestrator | Monday 09 February 2026 03:35:16 +0000 (0:00:01.105) 0:01:03.032 *******
2026-02-09 03:35:22.830895 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830899 | orchestrator |
2026-02-09 03:35:22.830904 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830908 | orchestrator | Monday 09 February 2026 03:35:17 +0000 (0:00:00.877) 0:01:03.910 *******
2026-02-09 03:35:22.830913 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830917 | orchestrator |
2026-02-09 03:35:22.830922 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830927 | orchestrator | Monday 09 February 2026 03:35:17 +0000 (0:00:00.239) 0:01:04.149 *******
2026-02-09 03:35:22.830931 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830936 | orchestrator |
2026-02-09 03:35:22.830940 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-09 03:35:22.830945 | orchestrator | Monday 09 February 2026 03:35:17 +0000 (0:00:00.214) 0:01:04.363 *******
2026-02-09 03:35:22.830949 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830954 | orchestrator |
2026-02-09 03:35:22.830958 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-09 03:35:22.830963 | orchestrator | Monday 09 February 2026 03:35:17 +0000 (0:00:00.189) 0:01:04.553 *******
2026-02-09 03:35:22.830967 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.830972 | orchestrator |
2026-02-09 03:35:22.830980 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-09 03:35:22.830985 | orchestrator | Monday 09 February 2026 03:35:17 +0000 (0:00:00.138) 0:01:04.692 *******
2026-02-09 03:35:22.830990 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '46be6a4f-1579-5910-a72e-9190b5238c92'}})
2026-02-09 03:35:22.831011 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fca1079b-480c-5ada-8652-888828a580b6'}})
2026-02-09 03:35:22.831016 | orchestrator |
2026-02-09 03:35:22.831021 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-09 03:35:22.831025 | orchestrator | Monday 09 February 2026 03:35:18 +0000 (0:00:00.200) 0:01:04.893 *******
2026-02-09 03:35:22.831031 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:22.831037 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:22.831041 | orchestrator |
2026-02-09 03:35:22.831046 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-09 03:35:22.831062 | orchestrator | Monday 09 February 2026 03:35:19 +0000 (0:00:01.759) 0:01:06.653 *******
2026-02-09 03:35:22.831067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:22.831073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:22.831078 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.831082 | orchestrator |
2026-02-09 03:35:22.831090 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-09 03:35:22.831095 | orchestrator | Monday 09 February 2026 03:35:20 +0000 (0:00:00.141) 0:01:06.794 *******
2026-02-09 03:35:22.831099 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:22.831104 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:22.831108 | orchestrator |
2026-02-09 03:35:22.831113 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-09 03:35:22.831118 | orchestrator | Monday 09 February 2026 03:35:21 +0000 (0:00:01.356) 0:01:08.151 *******
2026-02-09 03:35:22.831124 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:22.831129 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:22.831134 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.831140 | orchestrator |
2026-02-09 03:35:22.831146 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-09 03:35:22.831151 | orchestrator | Monday 09 February 2026 03:35:21 +0000 (0:00:00.142) 0:01:08.294 *******
2026-02-09 03:35:22.831156 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.831162 | orchestrator |
2026-02-09 03:35:22.831167 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-09 03:35:22.831172 | orchestrator | Monday 09 February 2026 03:35:21 +0000 (0:00:00.136) 0:01:08.430 *******
2026-02-09 03:35:22.831177 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:22.831183 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:22.831192 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.831198 | orchestrator |
2026-02-09 03:35:22.831203 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-09 03:35:22.831208 | orchestrator | Monday 09 February 2026 03:35:21 +0000 (0:00:00.323) 0:01:08.754 *******
2026-02-09 03:35:22.831213 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.831218 | orchestrator |
2026-02-09 03:35:22.831224 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-09 03:35:22.831229 | orchestrator | Monday 09 February 2026 03:35:22 +0000 (0:00:00.144) 0:01:08.898 *******
2026-02-09 03:35:22.831234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:22.831240 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:22.831245 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.831250 | orchestrator |
2026-02-09 03:35:22.831255 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-09 03:35:22.831260 | orchestrator | Monday 09 February 2026 03:35:22 +0000 (0:00:00.131) 0:01:09.029 *******
2026-02-09 03:35:22.831266 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.831271 | orchestrator |
2026-02-09 03:35:22.831276 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-09 03:35:22.831281 | orchestrator | Monday 09 February 2026 03:35:22 +0000 (0:00:00.125) 0:01:09.155 *******
2026-02-09 03:35:22.831286 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:22.831292 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:22.831297 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:22.831302 | orchestrator |
2026-02-09 03:35:22.831307 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-09 03:35:22.831312 | orchestrator | Monday 09 February 2026 03:35:22 +0000 (0:00:00.152) 0:01:09.308 *******
2026-02-09 03:35:22.831317 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:35:22.831323 | orchestrator |
2026-02-09 03:35:22.831328 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-09 03:35:22.831333 | orchestrator | Monday 09 February 2026 03:35:22 +0000 (0:00:00.141) 0:01:09.449 *******
2026-02-09 03:35:22.831343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:29.260834 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:29.261080 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.261111 | orchestrator |
2026-02-09 03:35:29.261130 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-09 03:35:29.261148 | orchestrator | Monday 09 February 2026 03:35:22 +0000 (0:00:00.159) 0:01:09.608 *******
2026-02-09 03:35:29.261190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:29.261210 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:29.261226 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.261243 | orchestrator |
2026-02-09 03:35:29.261261 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-09 03:35:29.261278 | orchestrator | Monday 09 February 2026 03:35:23 +0000 (0:00:00.180) 0:01:09.789 *******
2026-02-09 03:35:29.261325 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 03:35:29.261338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 03:35:29.261349 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.261361 | orchestrator |
2026-02-09 03:35:29.261372 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-09 03:35:29.261384 | orchestrator | Monday 09 February 2026 03:35:23 +0000 (0:00:00.169) 0:01:09.959 *******
2026-02-09 03:35:29.261394 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.261403 | orchestrator |
2026-02-09 03:35:29.261413 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-09 03:35:29.261422 | orchestrator | Monday 09 February 2026 03:35:23 +0000 (0:00:00.151) 0:01:10.111 *******
2026-02-09 03:35:29.261432 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.261442 | orchestrator |
2026-02-09 03:35:29.261452 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-09 03:35:29.261461 | orchestrator | Monday 09 February 2026 03:35:23 +0000 (0:00:00.137) 0:01:10.248 *******
2026-02-09 03:35:29.261471 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.261480 | orchestrator |
2026-02-09 03:35:29.261490 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-09 03:35:29.261500 | orchestrator | Monday 09 February 2026 03:35:23 +0000 (0:00:00.307) 0:01:10.556 *******
2026-02-09 03:35:29.261509 | orchestrator | ok: [testbed-node-5] => {
2026-02-09 03:35:29.261519 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-09 03:35:29.261529 | orchestrator | }
2026-02-09 03:35:29.261540 | orchestrator |
2026-02-09 03:35:29.261549 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-09 03:35:29.261559 | orchestrator | Monday 09 February 2026 03:35:23 +0000 (0:00:00.157) 0:01:10.713 *******
2026-02-09 03:35:29.261569 | orchestrator | ok: [testbed-node-5] => {
2026-02-09 03:35:29.261578 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-09 03:35:29.261588 | orchestrator | }
2026-02-09 03:35:29.261597 | orchestrator |
2026-02-09 03:35:29.261607 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-09 03:35:29.261616 | orchestrator | Monday 09 February 2026 03:35:24 +0000 (0:00:00.152) 0:01:10.866 *******
2026-02-09 03:35:29.261626 | orchestrator | ok: [testbed-node-5] => {
2026-02-09 03:35:29.261635 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-09 03:35:29.261645 | orchestrator | }
2026-02-09 03:35:29.261654 | orchestrator |
2026-02-09 03:35:29.261664 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-09 03:35:29.261673 | orchestrator | Monday 09 February 2026 03:35:24 +0000 (0:00:00.134) 0:01:11.000 *******
2026-02-09 03:35:29.261683 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:35:29.261693 | orchestrator |
2026-02-09 03:35:29.261702 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-09 03:35:29.261712 | orchestrator | Monday 09 February 2026 03:35:24 +0000 (0:00:00.499) 0:01:11.500 *******
2026-02-09 03:35:29.261721 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:35:29.261731 | orchestrator |
2026-02-09 03:35:29.261740 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-09 03:35:29.261750 | orchestrator | Monday 09 February 2026 03:35:25 +0000 (0:00:00.500) 0:01:12.000 *******
2026-02-09 03:35:29.261759 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:35:29.261769 | orchestrator |
2026-02-09 03:35:29.261778 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-09 03:35:29.261788 | orchestrator | Monday 09 February 2026 03:35:25 +0000 (0:00:00.501) 0:01:12.502 *******
2026-02-09 03:35:29.261797 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:35:29.261806 | orchestrator |
2026-02-09 03:35:29.261816 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-09 03:35:29.261833 | orchestrator | Monday 09 February 2026 03:35:25 +0000 (0:00:00.135) 0:01:12.637 *******
2026-02-09 03:35:29.261842 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.261852 | orchestrator |
2026-02-09 03:35:29.261861 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-09 03:35:29.261871 | orchestrator | Monday 09 February 2026 03:35:26 +0000 (0:00:00.164) 0:01:12.802 *******
2026-02-09 03:35:29.261880 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.261890 | orchestrator |
2026-02-09 03:35:29.261899 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-09 03:35:29.261909 | orchestrator | Monday 09 February 2026 03:35:26 +0000 (0:00:00.173) 0:01:12.976 *******
2026-02-09 03:35:29.261919 | orchestrator | ok: [testbed-node-5] => {
2026-02-09 03:35:29.261928 | orchestrator |     "vgs_report": {
2026-02-09 03:35:29.261938 | orchestrator |         "vg": []
2026-02-09 03:35:29.261968 | orchestrator |     }
2026-02-09 03:35:29.261978 | orchestrator | }
2026-02-09 03:35:29.261988 | orchestrator |
2026-02-09 03:35:29.262104 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-09 03:35:29.262115 | orchestrator | Monday 09 February 2026 03:35:26 +0000 (0:00:00.144) 0:01:13.121 *******
2026-02-09 03:35:29.262125 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.262135 | orchestrator |
2026-02-09 03:35:29.262145 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-09 03:35:29.262154 | orchestrator | Monday 09 February 2026 03:35:26 +0000 (0:00:00.137) 0:01:13.259 *******
2026-02-09 03:35:29.262174 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.262191 | orchestrator |
2026-02-09 03:35:29.262206 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-09 03:35:29.262223 | orchestrator | Monday 09 February 2026 03:35:26 +0000 (0:00:00.327) 0:01:13.587 *******
2026-02-09 03:35:29.262239 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:35:29.262255 | orchestrator |
2026-02-09 03:35:29.262272 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-09 03:35:29.262287 | orchestrator | Monday 09 February 2026 03:35:26 +0000 (0:00:00.164) 0:01:13.751 *******
2026-02-09 03:35:29.262304 | orchestrator |
skipping: [testbed-node-5] 2026-02-09 03:35:29.262321 | orchestrator | 2026-02-09 03:35:29.262338 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-09 03:35:29.262354 | orchestrator | Monday 09 February 2026 03:35:27 +0000 (0:00:00.129) 0:01:13.880 ******* 2026-02-09 03:35:29.262371 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262387 | orchestrator | 2026-02-09 03:35:29.262401 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-09 03:35:29.262415 | orchestrator | Monday 09 February 2026 03:35:27 +0000 (0:00:00.139) 0:01:14.019 ******* 2026-02-09 03:35:29.262428 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262442 | orchestrator | 2026-02-09 03:35:29.262456 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-09 03:35:29.262471 | orchestrator | Monday 09 February 2026 03:35:27 +0000 (0:00:00.187) 0:01:14.206 ******* 2026-02-09 03:35:29.262485 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262500 | orchestrator | 2026-02-09 03:35:29.262514 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-09 03:35:29.262529 | orchestrator | Monday 09 February 2026 03:35:27 +0000 (0:00:00.139) 0:01:14.345 ******* 2026-02-09 03:35:29.262546 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262562 | orchestrator | 2026-02-09 03:35:29.262579 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-09 03:35:29.262596 | orchestrator | Monday 09 February 2026 03:35:27 +0000 (0:00:00.137) 0:01:14.483 ******* 2026-02-09 03:35:29.262613 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262629 | orchestrator | 2026-02-09 03:35:29.262646 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-09 
03:35:29.262662 | orchestrator | Monday 09 February 2026 03:35:27 +0000 (0:00:00.138) 0:01:14.622 ******* 2026-02-09 03:35:29.262692 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262710 | orchestrator | 2026-02-09 03:35:29.262726 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-09 03:35:29.262743 | orchestrator | Monday 09 February 2026 03:35:27 +0000 (0:00:00.143) 0:01:14.765 ******* 2026-02-09 03:35:29.262759 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262775 | orchestrator | 2026-02-09 03:35:29.262792 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-09 03:35:29.262809 | orchestrator | Monday 09 February 2026 03:35:28 +0000 (0:00:00.165) 0:01:14.930 ******* 2026-02-09 03:35:29.262826 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262843 | orchestrator | 2026-02-09 03:35:29.262860 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-09 03:35:29.262876 | orchestrator | Monday 09 February 2026 03:35:28 +0000 (0:00:00.144) 0:01:15.075 ******* 2026-02-09 03:35:29.262892 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262908 | orchestrator | 2026-02-09 03:35:29.262926 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-09 03:35:29.262942 | orchestrator | Monday 09 February 2026 03:35:28 +0000 (0:00:00.341) 0:01:15.416 ******* 2026-02-09 03:35:29.262958 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.262975 | orchestrator | 2026-02-09 03:35:29.263029 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-09 03:35:29.263047 | orchestrator | Monday 09 February 2026 03:35:28 +0000 (0:00:00.158) 0:01:15.575 ******* 2026-02-09 03:35:29.263064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:29.263081 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:29.263098 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.263114 | orchestrator | 2026-02-09 03:35:29.263131 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-09 03:35:29.263148 | orchestrator | Monday 09 February 2026 03:35:28 +0000 (0:00:00.144) 0:01:15.720 ******* 2026-02-09 03:35:29.263165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:29.263181 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:29.263197 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:29.263214 | orchestrator | 2026-02-09 03:35:29.263230 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-09 03:35:29.263248 | orchestrator | Monday 09 February 2026 03:35:29 +0000 (0:00:00.165) 0:01:15.886 ******* 2026-02-09 03:35:29.263281 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:32.145289 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:32.145429 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:32.145445 | orchestrator | 2026-02-09 03:35:32.145479 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-09 03:35:32.145494 | orchestrator | Monday 09 February 2026 03:35:29 +0000 (0:00:00.152) 0:01:16.038 ******* 2026-02-09 03:35:32.145505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:32.145517 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:32.145558 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:32.145570 | orchestrator | 2026-02-09 03:35:32.145581 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-09 03:35:32.145593 | orchestrator | Monday 09 February 2026 03:35:29 +0000 (0:00:00.154) 0:01:16.193 ******* 2026-02-09 03:35:32.145604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:32.145616 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:32.145626 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:32.145637 | orchestrator | 2026-02-09 03:35:32.145648 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-09 03:35:32.145659 | orchestrator | Monday 09 February 2026 03:35:29 +0000 (0:00:00.157) 0:01:16.350 ******* 2026-02-09 03:35:32.145669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:32.145680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:32.145691 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:32.145702 | orchestrator | 2026-02-09 03:35:32.145713 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-09 03:35:32.145724 | orchestrator | Monday 09 February 2026 03:35:29 +0000 (0:00:00.147) 0:01:16.498 ******* 2026-02-09 03:35:32.145735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:32.145747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:32.145761 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:32.145773 | orchestrator | 2026-02-09 03:35:32.145785 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-09 03:35:32.145798 | orchestrator | Monday 09 February 2026 03:35:29 +0000 (0:00:00.149) 0:01:16.647 ******* 2026-02-09 03:35:32.145810 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:32.145823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:32.145835 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:32.145848 | orchestrator | 2026-02-09 03:35:32.145862 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-09 03:35:32.145874 | orchestrator | Monday 09 February 2026 03:35:30 +0000 (0:00:00.152) 0:01:16.800 ******* 2026-02-09 03:35:32.145887 | 
orchestrator | ok: [testbed-node-5] 2026-02-09 03:35:32.145902 | orchestrator | 2026-02-09 03:35:32.145914 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-09 03:35:32.145926 | orchestrator | Monday 09 February 2026 03:35:30 +0000 (0:00:00.524) 0:01:17.324 ******* 2026-02-09 03:35:32.145940 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:35:32.145952 | orchestrator | 2026-02-09 03:35:32.145965 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-09 03:35:32.145978 | orchestrator | Monday 09 February 2026 03:35:31 +0000 (0:00:00.671) 0:01:17.996 ******* 2026-02-09 03:35:32.146089 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:35:32.146106 | orchestrator | 2026-02-09 03:35:32.146121 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-09 03:35:32.146133 | orchestrator | Monday 09 February 2026 03:35:31 +0000 (0:00:00.154) 0:01:18.150 ******* 2026-02-09 03:35:32.146156 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'vg_name': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}) 2026-02-09 03:35:32.146171 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'vg_name': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}) 2026-02-09 03:35:32.146182 | orchestrator | 2026-02-09 03:35:32.146193 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-09 03:35:32.146204 | orchestrator | Monday 09 February 2026 03:35:31 +0000 (0:00:00.160) 0:01:18.310 ******* 2026-02-09 03:35:32.146238 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:32.146257 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:32.146268 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:32.146279 | orchestrator | 2026-02-09 03:35:32.146290 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-09 03:35:32.146301 | orchestrator | Monday 09 February 2026 03:35:31 +0000 (0:00:00.145) 0:01:18.456 ******* 2026-02-09 03:35:32.146312 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:32.146323 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:32.146334 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:32.146345 | orchestrator | 2026-02-09 03:35:32.146356 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-09 03:35:32.146367 | orchestrator | Monday 09 February 2026 03:35:31 +0000 (0:00:00.154) 0:01:18.611 ******* 2026-02-09 03:35:32.146378 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 03:35:32.146389 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 03:35:32.146400 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:32.146411 | orchestrator | 2026-02-09 03:35:32.146421 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-09 03:35:32.146432 | orchestrator | Monday 09 February 2026 03:35:31 +0000 (0:00:00.157) 0:01:18.769 ******* 2026-02-09 03:35:32.146443 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-09 03:35:32.146455 | orchestrator |  "lvm_report": { 2026-02-09 03:35:32.146466 | orchestrator |  "lv": [ 2026-02-09 03:35:32.146477 | orchestrator |  { 2026-02-09 03:35:32.146488 | orchestrator |  "lv_name": "osd-block-46be6a4f-1579-5910-a72e-9190b5238c92", 2026-02-09 03:35:32.146500 | orchestrator |  "vg_name": "ceph-46be6a4f-1579-5910-a72e-9190b5238c92" 2026-02-09 03:35:32.146511 | orchestrator |  }, 2026-02-09 03:35:32.146522 | orchestrator |  { 2026-02-09 03:35:32.146533 | orchestrator |  "lv_name": "osd-block-fca1079b-480c-5ada-8652-888828a580b6", 2026-02-09 03:35:32.146544 | orchestrator |  "vg_name": "ceph-fca1079b-480c-5ada-8652-888828a580b6" 2026-02-09 03:35:32.146555 | orchestrator |  } 2026-02-09 03:35:32.146566 | orchestrator |  ], 2026-02-09 03:35:32.146576 | orchestrator |  "pv": [ 2026-02-09 03:35:32.146587 | orchestrator |  { 2026-02-09 03:35:32.146598 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-09 03:35:32.146609 | orchestrator |  "vg_name": "ceph-46be6a4f-1579-5910-a72e-9190b5238c92" 2026-02-09 03:35:32.146620 | orchestrator |  }, 2026-02-09 03:35:32.146631 | orchestrator |  { 2026-02-09 03:35:32.146642 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-09 03:35:32.146666 | orchestrator |  "vg_name": "ceph-fca1079b-480c-5ada-8652-888828a580b6" 2026-02-09 03:35:32.146677 | orchestrator |  } 2026-02-09 03:35:32.146688 | orchestrator |  ] 2026-02-09 03:35:32.146698 | orchestrator |  } 2026-02-09 03:35:32.146710 | orchestrator | } 2026-02-09 03:35:32.146721 | orchestrator | 2026-02-09 03:35:32.146732 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:35:32.146743 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-09 03:35:32.146755 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-09 03:35:32.146766 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-09 03:35:32.146777 | orchestrator | 2026-02-09 03:35:32.146788 | orchestrator | 2026-02-09 03:35:32.146799 | orchestrator | 2026-02-09 03:35:32.146810 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:35:32.146821 | orchestrator | Monday 09 February 2026 03:35:32 +0000 (0:00:00.130) 0:01:18.899 ******* 2026-02-09 03:35:32.146832 | orchestrator | =============================================================================== 2026-02-09 03:35:32.146843 | orchestrator | Create block VGs -------------------------------------------------------- 5.59s 2026-02-09 03:35:32.146854 | orchestrator | Create block LVs -------------------------------------------------------- 4.13s 2026-02-09 03:35:32.146865 | orchestrator | Add known partitions to the list of available block devices ------------- 1.93s 2026-02-09 03:35:32.146876 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.76s 2026-02-09 03:35:32.146887 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.73s 2026-02-09 03:35:32.146898 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s 2026-02-09 03:35:32.146909 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2026-02-09 03:35:32.146919 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2026-02-09 03:35:32.146937 | orchestrator | Add known links to the list of available block devices ------------------ 1.45s 2026-02-09 03:35:32.452412 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2026-02-09 03:35:32.452568 | orchestrator | Add known links to the list of available block devices ------------------ 0.98s 2026-02-09 03:35:32.452585 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.97s 2026-02-09 03:35:32.452624 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2026-02-09 03:35:32.452636 | orchestrator | Calculate size needed for LVs on ceph_db_devices ------------------------ 0.86s 2026-02-09 03:35:32.452647 | orchestrator | Fail if number of OSDs exceeds num_osds for a DB+WAL VG ----------------- 0.84s 2026-02-09 03:35:32.452658 | orchestrator | Fail if DB LV size < 30 GiB for ceph_db_wal_devices --------------------- 0.82s 2026-02-09 03:35:32.452669 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s 2026-02-09 03:35:32.452680 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.80s 2026-02-09 03:35:32.452690 | orchestrator | Get initial list of available block devices ----------------------------- 0.80s 2026-02-09 03:35:32.452701 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-02-09 03:35:44.686283 | orchestrator | 2026-02-09 03:35:44 | INFO  | Task 64c83c8f-8097-4965-9499-b3f828dbd31f (facts) was prepared for execution. 2026-02-09 03:35:44.686403 | orchestrator | 2026-02-09 03:35:44 | INFO  | It takes a moment until task 64c83c8f-8097-4965-9499-b3f828dbd31f (facts) has been started and output is visible here. 
2026-02-09 03:35:57.704564 | orchestrator | 2026-02-09 03:35:57.704675 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-09 03:35:57.704733 | orchestrator | 2026-02-09 03:35:57.704753 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-09 03:35:57.704768 | orchestrator | Monday 09 February 2026 03:35:48 +0000 (0:00:00.261) 0:00:00.261 ******* 2026-02-09 03:35:57.704782 | orchestrator | ok: [testbed-manager] 2026-02-09 03:35:57.704799 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:35:57.704813 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:35:57.704827 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:35:57.704840 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:35:57.704855 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:35:57.704870 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:35:57.704885 | orchestrator | 2026-02-09 03:35:57.704900 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-09 03:35:57.704915 | orchestrator | Monday 09 February 2026 03:35:50 +0000 (0:00:01.246) 0:00:01.508 ******* 2026-02-09 03:35:57.704931 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:35:57.704947 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:35:57.704962 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:35:57.705005 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:35:57.705021 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:35:57.705034 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:57.705047 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:57.705062 | orchestrator | 2026-02-09 03:35:57.705075 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-09 03:35:57.705088 | orchestrator | 2026-02-09 03:35:57.705101 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-09 03:35:57.705116 | orchestrator | Monday 09 February 2026 03:35:51 +0000 (0:00:01.282) 0:00:02.791 ******* 2026-02-09 03:35:57.705129 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:35:57.705144 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:35:57.705158 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:35:57.705173 | orchestrator | ok: [testbed-manager] 2026-02-09 03:35:57.705187 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:35:57.705201 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:35:57.705215 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:35:57.705227 | orchestrator | 2026-02-09 03:35:57.705241 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-09 03:35:57.705254 | orchestrator | 2026-02-09 03:35:57.705267 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-09 03:35:57.705280 | orchestrator | Monday 09 February 2026 03:35:56 +0000 (0:00:05.121) 0:00:07.912 ******* 2026-02-09 03:35:57.705294 | orchestrator | skipping: [testbed-manager] 2026-02-09 03:35:57.705308 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:35:57.705321 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:35:57.705336 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:35:57.705350 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:35:57.705364 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:35:57.705379 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:35:57.705395 | orchestrator | 2026-02-09 03:35:57.705408 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:35:57.705423 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:35:57.705440 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-09 03:35:57.705456 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:35:57.705472 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:35:57.705488 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:35:57.705522 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:35:57.705537 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 03:35:57.705553 | orchestrator | 2026-02-09 03:35:57.705568 | orchestrator | 2026-02-09 03:35:57.705582 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:35:57.705618 | orchestrator | Monday 09 February 2026 03:35:57 +0000 (0:00:00.638) 0:00:08.550 ******* 2026-02-09 03:35:57.705633 | orchestrator | =============================================================================== 2026-02-09 03:35:57.705645 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.12s 2026-02-09 03:35:57.705659 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-02-09 03:35:57.705674 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-02-09 03:35:57.705688 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s 2026-02-09 03:36:00.076639 | orchestrator | 2026-02-09 03:36:00 | INFO  | Task 52e96267-daa4-40cd-8367-0ef4c5e88d7f (ceph) was prepared for execution. 2026-02-09 03:36:00.076710 | orchestrator | 2026-02-09 03:36:00 | INFO  | It takes a moment until task 52e96267-daa4-40cd-8367-0ef4c5e88d7f (ceph) has been started and output is visible here. 
2026-02-09 03:36:20.215914 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-09 03:36:20.216133 | orchestrator | 2.16.14 2026-02-09 03:36:20.216165 | orchestrator | 2026-02-09 03:36:20.216238 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-09 03:36:20.216261 | orchestrator | 2026-02-09 03:36:20.216279 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-09 03:36:20.216296 | orchestrator | Monday 09 February 2026 03:36:06 +0000 (0:00:00.915) 0:00:00.915 ******* 2026-02-09 03:36:20.216314 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:36:20.216333 | orchestrator | 2026-02-09 03:36:20.216351 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-09 03:36:20.216369 | orchestrator | Monday 09 February 2026 03:36:07 +0000 (0:00:01.269) 0:00:02.185 ******* 2026-02-09 03:36:20.216387 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:36:20.216405 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:36:20.216422 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:36:20.216439 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:36:20.216457 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:36:20.216474 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:36:20.216494 | orchestrator | 2026-02-09 03:36:20.216512 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-09 03:36:20.216531 | orchestrator | Monday 09 February 2026 03:36:08 +0000 (0:00:01.318) 0:00:03.504 ******* 2026-02-09 03:36:20.216550 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:36:20.216567 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:36:20.216585 | orchestrator | ok: [testbed-node-5] 2026-02-09 
03:36:20.216602 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:20.216619 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:20.216637 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:20.216654 | orchestrator |
2026-02-09 03:36:20.216672 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-09 03:36:20.216690 | orchestrator | Monday 09 February 2026 03:36:09 +0000 (0:00:00.874) 0:00:04.379 *******
2026-02-09 03:36:20.216708 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:20.216725 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:20.216743 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:20.216761 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:20.216811 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:20.216829 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:20.216845 | orchestrator |
2026-02-09 03:36:20.216863 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-09 03:36:20.216881 | orchestrator | Monday 09 February 2026 03:36:10 +0000 (0:00:00.955) 0:00:05.334 *******
2026-02-09 03:36:20.216899 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:20.216916 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:20.216933 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:20.216950 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:20.217006 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:20.217026 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:20.217044 | orchestrator |
2026-02-09 03:36:20.217063 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-09 03:36:20.217083 | orchestrator | Monday 09 February 2026 03:36:11 +0000 (0:00:00.889) 0:00:06.223 *******
2026-02-09 03:36:20.217101 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:20.217119 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:20.217137 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:20.217156 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:20.217176 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:20.217195 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:20.217213 | orchestrator |
2026-02-09 03:36:20.217232 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-09 03:36:20.217251 | orchestrator | Monday 09 February 2026 03:36:12 +0000 (0:00:00.633) 0:00:06.857 *******
2026-02-09 03:36:20.217272 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:20.217292 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:20.217311 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:20.217330 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:20.217348 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:20.217367 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:20.217388 | orchestrator |
2026-02-09 03:36:20.217409 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-09 03:36:20.217429 | orchestrator | Monday 09 February 2026 03:36:12 +0000 (0:00:00.947) 0:00:07.804 *******
2026-02-09 03:36:20.217448 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:20.217469 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:20.217486 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:20.217501 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:20.217519 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:20.217537 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:20.217554 | orchestrator |
2026-02-09 03:36:20.217572 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-09 03:36:20.217591 | orchestrator | Monday 09 February 2026 03:36:13 +0000 (0:00:00.650) 0:00:08.454 *******
2026-02-09 03:36:20.217610 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:20.217629 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:20.217647 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:20.217665 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:20.217685 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:20.217726 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:20.217746 | orchestrator |
2026-02-09 03:36:20.217764 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-09 03:36:20.217781 | orchestrator | Monday 09 February 2026 03:36:14 +0000 (0:00:00.899) 0:00:09.354 *******
2026-02-09 03:36:20.217799 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-09 03:36:20.217817 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 03:36:20.217835 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 03:36:20.217855 | orchestrator |
2026-02-09 03:36:20.217875 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-09 03:36:20.217894 | orchestrator | Monday 09 February 2026 03:36:15 +0000 (0:00:00.715) 0:00:10.069 *******
2026-02-09 03:36:20.217936 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:20.217957 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:20.218010 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:20.218142 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:20.218166 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:20.218186 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:20.218204 | orchestrator |
2026-02-09 03:36:20.218222 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-09 03:36:20.218241 | orchestrator | Monday 09 February 2026 03:36:16 +0000 (0:00:00.889) 0:00:10.959 *******
2026-02-09 03:36:20.218259 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-09 03:36:20.218278 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 03:36:20.218296 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 03:36:20.218314 | orchestrator |
2026-02-09 03:36:20.218332 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-09 03:36:20.218348 | orchestrator | Monday 09 February 2026 03:36:18 +0000 (0:00:02.556) 0:00:13.516 *******
2026-02-09 03:36:20.218365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-09 03:36:20.218382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-09 03:36:20.218400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-09 03:36:20.218419 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:20.218437 | orchestrator |
2026-02-09 03:36:20.218457 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-09 03:36:20.218477 | orchestrator | Monday 09 February 2026 03:36:19 +0000 (0:00:00.479) 0:00:13.995 *******
2026-02-09 03:36:20.218499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-09 03:36:20.218521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-09 03:36:20.218536 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-09 03:36:20.218548 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:20.218558 | orchestrator |
2026-02-09 03:36:20.218569 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-09 03:36:20.218580 | orchestrator | Monday 09 February 2026 03:36:19 +0000 (0:00:00.651) 0:00:14.647 *******
2026-02-09 03:36:20.218594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-09 03:36:20.218608 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-09 03:36:20.218619 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-09 03:36:20.218645 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:20.218656 | orchestrator |
2026-02-09 03:36:20.218676 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container]
***************************
2026-02-09 03:36:20.218687 | orchestrator | Monday 09 February 2026 03:36:19 +0000 (0:00:00.178) 0:00:14.825 *******
2026-02-09 03:36:20.218715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-09 03:36:17.086105', 'end': '2026-02-09 03:36:17.114545', 'delta': '0:00:00.028440', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-09 03:36:32.100827 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-09 03:36:17.615549', 'end': '2026-02-09 03:36:17.657567', 'delta': '0:00:00.042018', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-09 03:36:32.100929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-09 03:36:18.188935', 'end': '2026-02-09 03:36:18.237218', 'delta': '0:00:00.048283', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-09 03:36:32.100944 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.100984 | orchestrator |
2026-02-09 03:36:32.100995 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-09 03:36:32.101006 | orchestrator | Monday 09 February 2026 03:36:20 +0000 (0:00:00.814) 0:00:15.044 *******
2026-02-09 03:36:32.101015 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:32.101024 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:32.101033 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:32.101042 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:32.101050 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:32.101061 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:32.101076 | orchestrator |
2026-02-09 03:36:32.101095 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-09 03:36:32.101115 | orchestrator | Monday 09 February 2026 03:36:21 +0000 (0:00:01.014) 0:00:15.858 *******
2026-02-09 03:36:32.101148 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-09 03:36:32.101177 | orchestrator |
2026-02-09 03:36:32.101191 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-09 03:36:32.101205 | orchestrator | Monday 09 February 2026 03:36:22 +0000 (0:00:01.014) 0:00:16.872 *******
2026-02-09 03:36:32.101249 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.101264 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.101278 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.101292 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.101304 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.101317 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.101331 | orchestrator |
2026-02-09 03:36:32.101344 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-09 03:36:32.101359 | orchestrator | Monday 09 February 2026 03:36:23 +0000 (0:00:00.985) 0:00:17.857 *******
2026-02-09 03:36:32.101375 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.101391 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.101404 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.101420 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.101434 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.101447 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.101461 | orchestrator |
2026-02-09 03:36:32.101474 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-09 03:36:32.101491 | orchestrator | Monday 09 February 2026 03:36:24 +0000 (0:00:01.478) 0:00:19.336 *******
2026-02-09 03:36:32.101507 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.101522 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.101537 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.101553 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.101569 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.101601 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.101617 | orchestrator |
2026-02-09 03:36:32.101632 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-09 03:36:32.101647 | orchestrator | Monday 09 February 2026 03:36:25 +0000 (0:00:00.649) 0:00:19.986 *******
2026-02-09 03:36:32.101663 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.101678 | orchestrator |
2026-02-09 03:36:32.101694 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-09 03:36:32.101705 | orchestrator | Monday 09 February 2026 03:36:25 +0000 (0:00:00.180) 0:00:20.166 *******
2026-02-09 03:36:32.101715 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.101725 | orchestrator |
2026-02-09 03:36:32.101735 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-09 03:36:32.101743 | orchestrator | Monday 09 February 2026 03:36:25 +0000 (0:00:00.266) 0:00:20.433 *******
2026-02-09 03:36:32.101752 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.101760 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.101769 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.101777 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.101786 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.101795 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.101803 | orchestrator |
2026-02-09 03:36:32.101832 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-09 03:36:32.101842 | orchestrator | Monday 09 February 2026 03:36:26 +0000 (0:00:01.046) 0:00:21.480 *******
2026-02-09 03:36:32.101851 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.101859 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.101867 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.101876 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.101884 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.101893 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.101901 | orchestrator |
2026-02-09 03:36:32.101909 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-09 03:36:32.101918 | orchestrator | Monday 09 February 2026 03:36:27 +0000 (0:00:00.688) 0:00:22.169 *******
2026-02-09 03:36:32.101926 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.101935 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.101943 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.102083 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.102095 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.102104 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.102113 | orchestrator |
2026-02-09 03:36:32.102122 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-09 03:36:32.102131 | orchestrator | Monday 09 February 2026 03:36:28 +0000 (0:00:01.138) 0:00:23.307 *******
2026-02-09 03:36:32.102140 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.102148 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.102157 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.102166 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.102174 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.102183 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.102192 | orchestrator |
2026-02-09 03:36:32.102200 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-09 03:36:32.102209 | orchestrator | Monday 09 February 2026 03:36:29 +0000 (0:00:00.795) 0:00:24.103 *******
2026-02-09 03:36:32.102218 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.102226 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.102235 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.102243 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.102252 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.102261 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.102269 | orchestrator |
2026-02-09 03:36:32.102278 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-09 03:36:32.102287 | orchestrator | Monday 09 February 2026 03:36:30 +0000 (0:00:00.973) 0:00:25.076 *******
2026-02-09 03:36:32.102295 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.102304 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.102312 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.102321 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.102329 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.102338 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.102347 | orchestrator |
2026-02-09 03:36:32.102355 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-09 03:36:32.102365 | orchestrator | Monday 09 February 2026 03:36:30 +0000 (0:00:00.737) 0:00:25.814 *******
2026-02-09 03:36:32.102374 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:32.102382 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:32.102391 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:32.102400 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:32.102408 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:32.102417 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:32.102426 | orchestrator |
2026-02-09 03:36:32.102434 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-09 03:36:32.102443 | orchestrator | Monday 09 February 2026 03:36:31 +0000 (0:00:00.850) 0:00:26.665 *******
2026-02-09 03:36:32.102455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b', 'dm-uuid-LVM-0WjeRAA0lqf3cpEn6bug4xs5UGMazLjB0h01y39wS0A1Owicu3DkC9MW8cY3xQUQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.102475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d', 'dm-uuid-LVM-5Oms0YhgvCVrWp80wJ4aA96yxcElodY708xUFI15dbkcdnHIR6L7mBfIOccNLzlf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.102508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109638 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3', 'dm-uuid-LVM-28EU5fYWgLFVVTr1j10NPpT02LXZ3m2dqNBTokCpiFfT2ODyZTZ76Gse0HWZzEjm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9', 'dm-uuid-LVM-3CHn6ZP2pM8HpEDxSzeilwVQRF6lfj6OM8VSybDQwMAeXi61wvDItRKk6IUvThlx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109746 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-09 03:36:32.109779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.109795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UekHwl-BrrL-tQwo-R3UW-N6L4-qGv4-ixmNDb', 'scsi-0QEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d', 'scsi-SQEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-09 03:36:32.203507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.203597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DXOpal-X33W-ipPf-IHHU-xTym-5svh-1uUmz7', 'scsi-0QEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4', 'scsi-SQEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-09 03:36:32.203608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.203618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa', 'scsi-SQEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-09 03:36:32.203626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:36:32.203666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-09 03:36:32.203673 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
 2026-02-09 03:36:32.203678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.203699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.203707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GwhUsL-bhJV-LTOj-ZPeb-I83T-YRPV-54WlOk', 'scsi-0QEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509', 'scsi-SQEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.203719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TEtRPa-KFlO-eA6E-SkhX-jKKT-2BmX-PRBRTw', 'scsi-0QEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c', 'scsi-SQEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.203728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24', 'scsi-SQEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.369503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.369577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92', 'dm-uuid-LVM-SZPyknUsbhfLaF3x5K31ctP0vcigu1Pwp97ku36NfSW31vos0Gj86u7MmrIxN6I0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6', 'dm-uuid-LVM-UtcmtJOb91d0iC1jVKeu7Rh960XYKnyIcb9DX8DrOUkJ6Npc5MMds8BTnO00gFXN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369636 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.369685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.369695 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jvH3Zw-djyF-WIKe-T88H-f7IR-FEUt-vCkV4E', 'scsi-0QEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717', 'scsi-SQEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.369704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nj2fwl-jxqG-fYtS-q2di-jVVW-fVes-RibCJ0', 'scsi-0QEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46', 'scsi-SQEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.685643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0', 'scsi-SQEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.685727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.685758 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:36:32.685770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685850 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:36:32.685858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.685891 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:32.685899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:32.685941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034565 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:33.034701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:33.034710 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:36:33.034715 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:36:33.034719 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:36:33.034723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034750 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.034757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:36:33.266209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:33.266315 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:36:33.266332 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:36:33.266345 | orchestrator | 2026-02-09 03:36:33.266357 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-09 03:36:33.266370 | orchestrator | Monday 09 February 2026 03:36:33 +0000 (0:00:01.193) 0:00:27.858 ******* 2026-02-09 03:36:33.266383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b', 'dm-uuid-LVM-0WjeRAA0lqf3cpEn6bug4xs5UGMazLjB0h01y39wS0A1Owicu3DkC9MW8cY3xQUQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.266437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d', 'dm-uuid-LVM-5Oms0YhgvCVrWp80wJ4aA96yxcElodY708xUFI15dbkcdnHIR6L7mBfIOccNLzlf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.266451 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.266466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.266493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.266512 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.266531 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.266580 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3', 'dm-uuid-LVM-28EU5fYWgLFVVTr1j10NPpT02LXZ3m2dqNBTokCpiFfT2ODyZTZ76Gse0HWZzEjm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.314349 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.314440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9', 'dm-uuid-LVM-3CHn6ZP2pM8HpEDxSzeilwVQRF6lfj6OM8VSybDQwMAeXi61wvDItRKk6IUvThlx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.314469 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.314480 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.314490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.314519 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.314558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-09 03:36:33.314571 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.314581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UekHwl-BrrL-tQwo-R3UW-N6L4-qGv4-ixmNDb', 'scsi-0QEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d', 'scsi-SQEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.314604 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.772471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DXOpal-X33W-ipPf-IHHU-xTym-5svh-1uUmz7', 'scsi-0QEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4', 'scsi-SQEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.772597 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.772617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa', 'scsi-SQEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.772632 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.772666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.772701 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.772714 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.772736 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-09 03:36:33.772768 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GwhUsL-bhJV-LTOj-ZPeb-I83T-YRPV-54WlOk', 'scsi-0QEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509', 'scsi-SQEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.896784 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:36:33.896877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TEtRPa-KFlO-eA6E-SkhX-jKKT-2BmX-PRBRTw', 'scsi-0QEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c', 'scsi-SQEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.896903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24', 'scsi-SQEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.896913 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.896936 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92', 'dm-uuid-LVM-SZPyknUsbhfLaF3x5K31ctP0vcigu1Pwp97ku36NfSW31vos0Gj86u7MmrIxN6I0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.896999 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6', 'dm-uuid-LVM-UtcmtJOb91d0iC1jVKeu7Rh960XYKnyIcb9DX8DrOUkJ6Npc5MMds8BTnO00gFXN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.897013 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.897026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.897044 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.897055 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.897069 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.897076 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:36:33.897083 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.897097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.989710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.989812 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-09 03:36:33.989842 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.989867 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jvH3Zw-djyF-WIKe-T88H-f7IR-FEUt-vCkV4E', 'scsi-0QEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717', 'scsi-SQEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.989876 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.989883 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nj2fwl-jxqG-fYtS-q2di-jVVW-fVes-RibCJ0', 'scsi-0QEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46', 'scsi-SQEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.989901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.989908 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.989915 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0', 'scsi-SQEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:33.989974 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162080 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162190 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162221 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162230 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162261 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'labels': ['BOOT'], 
'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162277 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162291 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:36:34.162300 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 
03:36:34.162307 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162314 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162321 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.162333 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393614 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393702 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393709 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393728 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15', 
'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393743 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393749 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:36:34.393754 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:36:34.393759 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393763 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393768 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393772 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname 
in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393777 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:34.393792 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:36:41.644407 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-09 03:36:41.644508 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-09 03:36:41.644528 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-09 03:36:41.644597 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-09 03:36:41.644613 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:41.644626 | orchestrator |
2026-02-09 03:36:41.644638 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-09 03:36:41.644649 | orchestrator | Monday 09 February 2026 03:36:34 +0000 (0:00:01.368) 0:00:29.226 *******
2026-02-09 03:36:41.644660 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:41.644671 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:41.644681 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:41.644692 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:41.644702 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:41.644712 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:41.644723 | orchestrator |
2026-02-09 03:36:41.644734 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-09 03:36:41.644744 | orchestrator | Monday 09 February 2026 03:36:35 +0000 (0:00:00.986) 0:00:30.213 *******
2026-02-09 03:36:41.644768 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:41.644777 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:41.644787 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:41.644815 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:41.644834 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:41.644843 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:41.644853 | orchestrator |
2026-02-09 03:36:41.644863 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-09 03:36:41.644875 | orchestrator | Monday 09 February 2026 03:36:36 +0000 (0:00:00.857) 0:00:31.070 *******
2026-02-09 03:36:41.644886 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:41.644897 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:41.644908 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:41.644919 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:41.644931 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:41.644941 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:41.644981 | orchestrator |
2026-02-09 03:36:41.644993 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-09 03:36:41.645004 | orchestrator | Monday 09 February 2026 03:36:36 +0000 (0:00:00.620) 0:00:31.691 *******
2026-02-09 03:36:41.645016 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:41.645027 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:41.645038 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:41.645049 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:41.645106 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:41.645118 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:41.645128 | orchestrator |
2026-02-09 03:36:41.645138 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-09 03:36:41.645148 | orchestrator | Monday 09 February 2026 03:36:37 +0000 (0:00:00.864) 0:00:32.555 *******
2026-02-09 03:36:41.645157 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:41.645167 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:41.645177 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:41.645196 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:41.645206 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:41.645216 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:41.645225 | orchestrator |
2026-02-09 03:36:41.645235 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-09 03:36:41.645245 | orchestrator | Monday 09 February 2026 03:36:38 +0000 (0:00:00.722) 0:00:33.278 *******
2026-02-09 03:36:41.645254 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:41.645264 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:41.645274 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:41.645283 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:41.645293 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:41.645302 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:41.645312 | orchestrator |
2026-02-09 03:36:41.645321 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-09 03:36:41.645331 | orchestrator | Monday 09 February 2026 03:36:39 +0000 (0:00:00.925) 0:00:34.203 *******
2026-02-09 03:36:41.645371 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-09 03:36:41.645382 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-09 03:36:41.645391 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-09 03:36:41.645401 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-09 03:36:41.645410 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-09 03:36:41.645420 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-09 03:36:41.645429 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-09 03:36:41.645439 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 03:36:41.645449 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-09 03:36:41.645458 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-09 03:36:41.645468 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-09 03:36:41.645477 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-09 03:36:41.645487 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 03:36:41.645496 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-09 03:36:41.645506 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-09 03:36:41.645516 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 03:36:41.645525 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-09 03:36:41.645540 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-09 03:36:41.645550 | orchestrator |
2026-02-09 03:36:41.645560 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-09 03:36:41.645570 | orchestrator | Monday 09 February 2026 03:36:41 +0000 (0:00:01.774) 0:00:35.978 *******
2026-02-09 03:36:41.645579 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-09 03:36:41.645589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-09 03:36:41.645599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-09 03:36:41.645609 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:41.645626 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-09 03:36:57.620155 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-09 03:36:57.620252 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-09 03:36:57.620264 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:57.620273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-09 03:36:57.620282 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-09 03:36:57.620290 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-09 03:36:57.620298 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:57.620307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 03:36:57.620315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 03:36:57.620346 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 03:36:57.620354 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:57.620362 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-09 03:36:57.620370 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-09 03:36:57.620378 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-09 03:36:57.620386 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:57.620394 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-09 03:36:57.620402 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-09 03:36:57.620409 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-09 03:36:57.620417 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:57.620425 | orchestrator |
2026-02-09 03:36:57.620434 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-09 03:36:57.620444 | orchestrator | Monday 09 February 2026 03:36:42 +0000 (0:00:01.002) 0:00:36.981 *******
2026-02-09 03:36:57.620451 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:57.620459 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:57.620467 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:57.620475 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:36:57.620483 | orchestrator |
2026-02-09 03:36:57.620492 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-09 03:36:57.620501 | orchestrator | Monday 09 February 2026 03:36:43 +0000 (0:00:01.084) 0:00:38.065 *******
2026-02-09 03:36:57.620508 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:57.620517 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:57.620525 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:57.620532 | orchestrator |
2026-02-09 03:36:57.620540 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-09 03:36:57.620548 | orchestrator | Monday 09 February 2026 03:36:43 +0000 (0:00:00.379) 0:00:38.444 *******
2026-02-09 03:36:57.620556 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:57.620564 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:57.620571 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:57.620579 | orchestrator |
2026-02-09 03:36:57.620587 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-09 03:36:57.620594 | orchestrator | Monday 09 February 2026 03:36:43 +0000 (0:00:00.358) 0:00:38.803 *******
2026-02-09 03:36:57.620602 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:57.620610 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:57.620617 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:57.620625 | orchestrator |
2026-02-09 03:36:57.620633 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-09 03:36:57.620640 | orchestrator | Monday 09 February 2026 03:36:44 +0000 (0:00:00.323) 0:00:39.127 *******
2026-02-09 03:36:57.620648 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:57.620658 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:57.620668 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:57.620677 | orchestrator |
2026-02-09 03:36:57.620686 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-09 03:36:57.620695 | orchestrator | Monday 09 February 2026 03:36:45 +0000 (0:00:00.719) 0:00:39.846 *******
2026-02-09 03:36:57.620705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:36:57.620714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-09 03:36:57.620723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 03:36:57.620733 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:57.620742 | orchestrator |
2026-02-09 03:36:57.620751 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-09 03:36:57.620766 | orchestrator | Monday 09 February 2026 03:36:45 +0000 (0:00:00.414) 0:00:40.260 *******
2026-02-09 03:36:57.620775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:36:57.620784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-09 03:36:57.620794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 03:36:57.620841 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:57.620850 | orchestrator |
2026-02-09 03:36:57.620859 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-09 03:36:57.620869 | orchestrator | Monday 09 February 2026 03:36:45 +0000 (0:00:00.411) 0:00:40.671 *******
2026-02-09 03:36:57.620900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:36:57.620910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-09 03:36:57.620919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 03:36:57.620929 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:57.620938 | orchestrator |
2026-02-09 03:36:57.620965 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-09 03:36:57.620993 | orchestrator | Monday 09 February 2026 03:36:46 +0000 (0:00:00.413) 0:00:41.085 *******
2026-02-09 03:36:57.621002 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:57.621011 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:57.621021 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:57.621030 | orchestrator |
2026-02-09 03:36:57.621053 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-09 03:36:57.621062 | orchestrator | Monday 09 February 2026 03:36:46 +0000 (0:00:00.342) 0:00:41.428 *******
2026-02-09 03:36:57.621070 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-09 03:36:57.621078 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-09 03:36:57.621086 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-09 03:36:57.621093 | orchestrator |
2026-02-09 03:36:57.621101 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-09 03:36:57.621109 | orchestrator | Monday 09 February 2026 03:36:47 +0000 (0:00:01.054) 0:00:42.482 *******
2026-02-09 03:36:57.621117 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-09 03:36:57.621125 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 03:36:57.621133 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 03:36:57.621141 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:36:57.621149 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-09 03:36:57.621156 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-09 03:36:57.621166 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-09 03:36:57.621180 | orchestrator |
2026-02-09 03:36:57.621192 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-09 03:36:57.621205 | orchestrator | Monday 09 February 2026 03:36:48 +0000 (0:00:00.887) 0:00:43.370 *******
2026-02-09 03:36:57.621218 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-09 03:36:57.621231 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 03:36:57.621245 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 03:36:57.621257 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:36:57.621268 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-09 03:36:57.621279 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-09 03:36:57.621291 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-09 03:36:57.621303 | orchestrator |
2026-02-09 03:36:57.621314 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-09 03:36:57.621335 | orchestrator | Monday 09 February 2026 03:36:50 +0000 (0:00:02.263) 0:00:45.633 *******
2026-02-09 03:36:57.621349 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:36:57.621363 | orchestrator |
2026-02-09 03:36:57.621377 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-09 03:36:57.621391 | orchestrator | Monday 09 February 2026 03:36:52 +0000 (0:00:01.447) 0:00:47.081 *******
2026-02-09 03:36:57.621403 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:36:57.621415 | orchestrator |
2026-02-09 03:36:57.621428 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-09 03:36:57.621440 | orchestrator | Monday 09 February 2026 03:36:53 +0000 (0:00:01.505) 0:00:48.586 *******
2026-02-09 03:36:57.621453 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:36:57.621466 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:36:57.621479 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:36:57.621492 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:36:57.621504 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:36:57.621517 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:36:57.621531 | orchestrator |
2026-02-09 03:36:57.621545 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-09 03:36:57.621559 | orchestrator | Monday 09 February 2026 03:36:55 +0000 (0:00:01.360) 0:00:49.947 *******
2026-02-09 03:36:57.621571 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:57.621585 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:57.621597 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:57.621611 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:57.621623 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:57.621637 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:57.621650 | orchestrator |
2026-02-09 03:36:57.621663 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-09 03:36:57.621675 | orchestrator | Monday 09 February 2026 03:36:55 +0000 (0:00:00.741) 0:00:50.689 *******
2026-02-09 03:36:57.621688 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:57.621700 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:57.621712 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:57.621726 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:57.621739 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:57.621751 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:57.621773 | orchestrator |
2026-02-09 03:36:57.621787 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-09 03:36:57.621800 | orchestrator | Monday 09 February 2026 03:36:56 +0000 (0:00:00.993) 0:00:51.682 *******
2026-02-09 03:36:57.621814 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:36:57.621827 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:36:57.621840 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:36:57.621853 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:36:57.621866 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:36:57.621880 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:36:57.621893 | orchestrator |
2026-02-09 03:36:57.621907 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-09 03:36:57.621937 | orchestrator | Monday 09 February 2026 03:36:57 +0000 (0:00:00.761) 0:00:52.444 *******
2026-02-09 03:37:19.457616 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.457716 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.457724 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.457731 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:37:19.457739 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:37:19.457745 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:37:19.457752 | orchestrator |
2026-02-09 03:37:19.457760 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-09 03:37:19.457792 | orchestrator | Monday 09 February 2026 03:36:58 +0000 (0:00:01.316) 0:00:53.760 *******
2026-02-09 03:37:19.457799 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.457806 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.457812 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.457818 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.457825 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.457831 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.457837 | orchestrator |
2026-02-09 03:37:19.457844 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-09 03:37:19.457851 | orchestrator | Monday 09 February 2026 03:36:59 +0000 (0:00:00.685) 0:00:54.446 *******
2026-02-09 03:37:19.457858 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.457865 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.457872 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.457878 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.457884 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.457891 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.457897 | orchestrator |
2026-02-09 03:37:19.457903 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-09 03:37:19.457910 | orchestrator | Monday 09 February 2026 03:37:00 +0000 (0:00:00.893) 0:00:55.340 *******
2026-02-09 03:37:19.457917 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:37:19.457923 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:37:19.457930 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:37:19.458008 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:37:19.458064 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:37:19.458072 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:37:19.458078 | orchestrator |
2026-02-09 03:37:19.458085 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-09 03:37:19.458092 | orchestrator | Monday 09 February 2026 03:37:01 +0000 (0:00:01.071) 0:00:56.411 *******
2026-02-09 03:37:19.458099 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:37:19.458106 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:37:19.458113 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:37:19.458120 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:37:19.458127 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:37:19.458133 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:37:19.458140 | orchestrator |
2026-02-09 03:37:19.458146 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-09 03:37:19.458154 | orchestrator | Monday 09 February 2026 03:37:02 +0000 (0:00:01.333) 0:00:57.745 *******
2026-02-09 03:37:19.458161 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.458168 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.458175 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.458183 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.458190 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.458197 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.458204 | orchestrator |
2026-02-09 03:37:19.458211 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-09 03:37:19.458219 | orchestrator | Monday 09 February 2026 03:37:03 +0000 (0:00:00.682) 0:00:58.427 *******
2026-02-09 03:37:19.458226 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.458232 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.458240 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.458247 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:37:19.458254 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:37:19.458261 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:37:19.458267 | orchestrator |
2026-02-09 03:37:19.458274 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-09 03:37:19.458282 | orchestrator | Monday 09 February 2026 03:37:04 +0000 (0:00:00.918) 0:00:59.345 *******
2026-02-09 03:37:19.458289 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:37:19.458296 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:37:19.458311 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:37:19.458318 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.458326 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.458333 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.458340 | orchestrator |
2026-02-09 03:37:19.458347 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-09 03:37:19.458355 | orchestrator | Monday 09 February 2026 03:37:05 +0000 (0:00:00.656) 0:01:00.002 *******
2026-02-09 03:37:19.458363 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:37:19.458370 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:37:19.458377 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:37:19.458384 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.458391 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.458399 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.458406 | orchestrator |
2026-02-09 03:37:19.458413 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-09 03:37:19.458420 | orchestrator | Monday 09 February 2026 03:37:06 +0000 (0:00:00.971) 0:01:00.973 *******
2026-02-09 03:37:19.458427 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:37:19.458435 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:37:19.458442 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:37:19.458449 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.458456 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.458478 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.458483 | orchestrator |
2026-02-09 03:37:19.458490 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-09 03:37:19.458496 | orchestrator | Monday 09 February 2026 03:37:06 +0000 (0:00:00.652) 0:01:01.625 *******
2026-02-09 03:37:19.458503 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.458509 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.458515 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.458522 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.458528 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.458535 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.458541 | orchestrator |
2026-02-09 03:37:19.458549 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-09 03:37:19.458574 | orchestrator | Monday 09 February 2026 03:37:07 +0000 (0:00:00.897) 0:01:02.523 *******
2026-02-09 03:37:19.458580 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.458586 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.458593 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.458599 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.458605 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.458611 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.458617 | orchestrator |
2026-02-09 03:37:19.458623 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-09 03:37:19.458629 | orchestrator | Monday 09 February 2026 03:37:08 +0000 (0:00:00.608) 0:01:03.131 *******
2026-02-09 03:37:19.458635 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.458642 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.458648 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.458655 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:37:19.458660 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:37:19.458666 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:37:19.458673 | orchestrator |
2026-02-09 03:37:19.458678 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-09 03:37:19.458684 | orchestrator | Monday 09 February 2026 03:37:09 +0000 (0:00:00.866) 0:01:03.998 *******
2026-02-09 03:37:19.458690 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:37:19.458696 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:37:19.458701 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:37:19.458707 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:37:19.458713 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:37:19.458719 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:37:19.458732 | orchestrator |
2026-02-09 03:37:19.458739 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-09 03:37:19.458745 | orchestrator | Monday 09 February 2026 03:37:09 +0000 (0:00:00.661) 0:01:04.659 *******
2026-02-09 03:37:19.458751 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:37:19.458757 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:37:19.458770 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:37:19.458776 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:37:19.458782 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:37:19.458788 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:37:19.458794 | orchestrator |
2026-02-09 03:37:19.458801 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-09 03:37:19.458807 | orchestrator | Monday 09 February 2026 03:37:11 +0000 (0:00:01.381) 0:01:06.041 *******
2026-02-09 03:37:19.458813 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:37:19.458819 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:37:19.458825 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:37:19.458830 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:37:19.458837 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:37:19.458843 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:37:19.458849 | orchestrator |
2026-02-09 03:37:19.458855 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-09 03:37:19.458861 | orchestrator | Monday 09 February 2026 03:37:12 +0000 (0:00:01.766) 0:01:07.807 *******
2026-02-09 03:37:19.458867 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:37:19.458873 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:37:19.458880 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:37:19.458886 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:37:19.458892 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:37:19.458899 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:37:19.458904 | orchestrator |
2026-02-09 03:37:19.458910 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-09 03:37:19.458916 | orchestrator | Monday 09 February 2026 03:37:15 +0000 (0:00:02.046) 0:01:09.854 *******
2026-02-09 03:37:19.458923 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:37:19.458931 | orchestrator |
2026-02-09 03:37:19.458952 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-09 03:37:19.458958 | orchestrator | Monday 09 February 2026 03:37:16 +0000 (0:00:01.541) 0:01:11.395 *******
2026-02-09 03:37:19.458965 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.458971 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.458978 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.458983 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.458989 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.458995 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.459000 | orchestrator |
2026-02-09 03:37:19.459007 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-09 03:37:19.459013 | orchestrator | Monday 09 February 2026 03:37:17 +0000 (0:00:00.844) 0:01:12.044 *******
2026-02-09 03:37:19.459019 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:37:19.459025 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:37:19.459031 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:37:19.459037 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:37:19.459042 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:37:19.459048 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:37:19.459054 | orchestrator |
2026-02-09 03:37:19.459060 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-09 03:37:19.459066 | orchestrator | Monday 09 February 2026 03:37:18 +0000 (0:00:00.844) 0:01:12.889 *******
2026-02-09 03:37:19.459072 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-09 03:37:19.459082 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-09 03:37:19.459094 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-09 03:37:19.459099 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-09 03:37:19.459105 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-09 03:37:19.459111 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-09 03:37:19.459118 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-09 03:37:19.459131 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-09 03:38:31.404059 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-09 03:38:31.404174 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-09 03:38:31.404189 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-09 03:38:31.404199 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-09 03:38:31.404208 | orchestrator |
2026-02-09 03:38:31.404218 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-09 03:38:31.404226 | orchestrator | Monday 09 February 2026 03:37:19 +0000 (0:00:01.396) 0:01:14.286 *******
2026-02-09 03:38:31.404235 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:38:31.404244 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:38:31.404253 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:38:31.404262 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:38:31.404270 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:38:31.404279 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:38:31.404288 | orchestrator |
2026-02-09 03:38:31.404297 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-09 03:38:31.404306 | orchestrator | Monday 09 February 2026 03:37:20 +0000 (0:00:01.170) 0:01:15.457 *******
2026-02-09 03:38:31.404315 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:31.404322 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:31.404331 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:31.404341 | orchestrator | skipping: [testbed-node-0]
2026-02-09
03:38:31.404350 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:38:31.404359 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:38:31.404368 | orchestrator | 2026-02-09 03:38:31.404377 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-09 03:38:31.404386 | orchestrator | Monday 09 February 2026 03:37:21 +0000 (0:00:00.705) 0:01:16.162 ******* 2026-02-09 03:38:31.404395 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:38:31.404404 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:38:31.404413 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:38:31.404421 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:38:31.404430 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:38:31.404439 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:38:31.404448 | orchestrator | 2026-02-09 03:38:31.404457 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-09 03:38:31.404467 | orchestrator | Monday 09 February 2026 03:37:22 +0000 (0:00:00.970) 0:01:17.133 ******* 2026-02-09 03:38:31.404476 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:38:31.404485 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:38:31.404493 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:38:31.404502 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:38:31.404511 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:38:31.404519 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:38:31.404529 | orchestrator | 2026-02-09 03:38:31.404557 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-09 03:38:31.404568 | orchestrator | Monday 09 February 2026 03:37:22 +0000 (0:00:00.634) 0:01:17.768 ******* 2026-02-09 03:38:31.404606 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:38:31.404618 | orchestrator | 2026-02-09 03:38:31.404627 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-09 03:38:31.404637 | orchestrator | Monday 09 February 2026 03:37:24 +0000 (0:00:01.335) 0:01:19.103 ******* 2026-02-09 03:38:31.404647 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:38:31.404658 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:38:31.404669 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:38:31.404678 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:38:31.404688 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:38:31.404697 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:38:31.404706 | orchestrator | 2026-02-09 03:38:31.404716 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-09 03:38:31.404726 | orchestrator | Monday 09 February 2026 03:38:20 +0000 (0:00:55.981) 0:02:15.085 ******* 2026-02-09 03:38:31.404735 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-09 03:38:31.404745 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-09 03:38:31.404753 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-09 03:38:31.404763 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:38:31.404772 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-09 03:38:31.404780 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-09 03:38:31.404789 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-09 03:38:31.404797 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:38:31.404806 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-02-09 03:38:31.404814 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-09 03:38:31.404836 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-09 03:38:31.404845 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:31.404854 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-09 03:38:31.404863 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-09 03:38:31.404871 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-09 03:38:31.404879 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:31.404887 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-09 03:38:31.404937 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-09 03:38:31.404946 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-09 03:38:31.404953 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:31.404961 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-09 03:38:31.404969 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-09 03:38:31.404976 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-09 03:38:31.404983 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:31.404990 | orchestrator |
2026-02-09 03:38:31.404997 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-09 03:38:31.405005 | orchestrator | Monday 09 February 2026 03:38:20 +0000 (0:00:00.711) 0:02:15.797 *******
2026-02-09 03:38:31.405012 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:31.405019 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:31.405026 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:31.405034 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:31.405041 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:31.405058 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:31.405065 | orchestrator |
2026-02-09 03:38:31.405072 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-09 03:38:31.405080 | orchestrator | Monday 09 February 2026 03:38:21 +0000 (0:00:00.169) 0:02:16.669 *******
2026-02-09 03:38:31.405088 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:31.405095 | orchestrator |
2026-02-09 03:38:31.405103 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-09 03:38:31.405110 | orchestrator | Monday 09 February 2026 03:38:21 +0000 (0:00:00.169) 0:02:16.839 *******
2026-02-09 03:38:31.405118 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:31.405125 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:31.405133 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:31.405139 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:31.405147 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:31.405154 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:31.405162 | orchestrator |
2026-02-09 03:38:31.405169 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-09 03:38:31.405177 | orchestrator | Monday 09 February 2026 03:38:22 +0000 (0:00:00.711) 0:02:17.550 *******
2026-02-09 03:38:31.405185 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:31.405192 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:31.405201 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:31.405208 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:31.405217 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:31.405225 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:31.405232 | orchestrator |
2026-02-09 03:38:31.405240 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-09 03:38:31.405248 | orchestrator | Monday 09 February 2026 03:38:23 +0000 (0:00:00.929) 0:02:18.480 *******
2026-02-09 03:38:31.405256 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:31.405264 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:31.405271 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:31.405280 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:31.405287 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:31.405295 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:31.405302 | orchestrator |
2026-02-09 03:38:31.405309 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-09 03:38:31.405316 | orchestrator | Monday 09 February 2026 03:38:24 +0000 (0:00:00.661) 0:02:19.141 *******
2026-02-09 03:38:31.405324 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:38:31.405332 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:38:31.405340 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:38:31.405347 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:38:31.405355 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:38:31.405363 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:38:31.405370 | orchestrator |
2026-02-09 03:38:31.405378 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-09 03:38:31.405387 | orchestrator | Monday 09 February 2026 03:38:27 +0000 (0:00:03.415) 0:02:22.557 *******
2026-02-09 03:38:31.405392 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:38:31.405396 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:38:31.405401 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:38:31.405406 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:38:31.405411 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:38:31.405415 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:38:31.405420 | orchestrator |
2026-02-09 03:38:31.405425 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-09 03:38:31.405429 | orchestrator | Monday 09 February 2026 03:38:28 +0000 (0:00:00.647) 0:02:23.205 *******
2026-02-09 03:38:31.405435 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:38:31.405442 | orchestrator |
2026-02-09 03:38:31.405446 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-09 03:38:31.405459 | orchestrator | Monday 09 February 2026 03:38:29 +0000 (0:00:01.421) 0:02:24.626 *******
2026-02-09 03:38:31.405464 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:31.405468 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:31.405473 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:31.405478 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:31.405488 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:31.405493 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:31.405498 | orchestrator |
2026-02-09 03:38:31.405503 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-09 03:38:31.405507 | orchestrator | Monday 09 February 2026 03:38:30 +0000 (0:00:00.917) 0:02:25.543 *******
2026-02-09 03:38:31.405512 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:31.405517 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:31.405521 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:31.405526 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:31.405531 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:31.405536 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:31.405540 | orchestrator |
2026-02-09 03:38:31.405545 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-09 03:38:31.405558 | orchestrator | Monday 09 February 2026 03:38:31 +0000 (0:00:00.680) 0:02:26.224 *******
2026-02-09 03:38:45.599071 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:45.599178 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:45.599193 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:45.599205 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:45.599215 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:45.599227 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:45.599238 | orchestrator |
2026-02-09 03:38:45.599251 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-09 03:38:45.599263 | orchestrator | Monday 09 February 2026 03:38:32 +0000 (0:00:01.173) 0:02:27.397 *******
2026-02-09 03:38:45.599274 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:45.599285 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:45.599296 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:45.599306 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:45.599317 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:45.599327 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:45.599338 | orchestrator |
2026-02-09 03:38:45.599349 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-09 03:38:45.599360 | orchestrator | Monday 09 February 2026 03:38:33 +0000 (0:00:00.675) 0:02:28.073 *******
2026-02-09 03:38:45.599371 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:45.599381 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:45.599392 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:45.599402 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:45.599413 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:45.599423 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:45.599434 | orchestrator |
2026-02-09 03:38:45.599444 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-09 03:38:45.599455 | orchestrator | Monday 09 February 2026 03:38:34 +0000 (0:00:00.955) 0:02:29.028 *******
2026-02-09 03:38:45.599466 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:45.599476 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:45.599487 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:45.599497 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:45.599508 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:45.599519 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:45.599532 | orchestrator |
2026-02-09 03:38:45.599545 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-09 03:38:45.599557 | orchestrator | Monday 09 February 2026 03:38:34 +0000 (0:00:00.646) 0:02:29.675 *******
2026-02-09 03:38:45.599592 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:45.599605 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:45.599618 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:45.599630 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:45.599643 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:45.599655 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:45.599667 | orchestrator |
2026-02-09 03:38:45.599679 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-09 03:38:45.599692 | orchestrator | Monday 09 February 2026 03:38:35 +0000 (0:00:00.899) 0:02:30.574 *******
2026-02-09 03:38:45.599704 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:38:45.599717 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:38:45.599729 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:38:45.599741 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:38:45.599754 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:38:45.599767 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:38:45.599778 | orchestrator |
2026-02-09 03:38:45.599791 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-09 03:38:45.599848 | orchestrator | Monday 09 February 2026 03:38:36 +0000 (0:00:00.650) 0:02:31.225 *******
2026-02-09 03:38:45.599875 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:38:45.599894 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:38:45.599936 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:38:45.599955 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:38:45.599973 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:38:45.599990 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:38:45.600008 | orchestrator |
2026-02-09 03:38:45.600024 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-09 03:38:45.600042 | orchestrator | Monday 09 February 2026 03:38:37 +0000 (0:00:01.377) 0:02:32.602 *******
2026-02-09 03:38:45.600060 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:38:45.600079 | orchestrator |
2026-02-09 03:38:45.600097 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-09 03:38:45.600115 | orchestrator | Monday 09 February 2026 03:38:39 +0000 (0:00:01.319) 0:02:33.922 *******
2026-02-09 03:38:45.600134 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-02-09 03:38:45.600152 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-09 03:38:45.600172 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-02-09 03:38:45.600190 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-02-09 03:38:45.600205 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-02-09 03:38:45.600216 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-09 03:38:45.600227 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-09 03:38:45.600253 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-02-09 03:38:45.600266 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-09 03:38:45.600283 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-02-09 03:38:45.600300 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-09 03:38:45.600318 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-09 03:38:45.600336 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-09 03:38:45.600354 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-09 03:38:45.600370 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-09 03:38:45.600382 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-09 03:38:45.600392 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-09 03:38:45.600425 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-09 03:38:45.600437 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-09 03:38:45.600462 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-09 03:38:45.600473 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-09 03:38:45.600484 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-09 03:38:45.600495 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-09 03:38:45.600505 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-09 03:38:45.600516 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-09 03:38:45.600526 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-09 03:38:45.600537 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-09 03:38:45.600548 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-09 03:38:45.600558 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-09 03:38:45.600569 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-09 03:38:45.600579 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-09 03:38:45.600590 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-09 03:38:45.600600 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-09 03:38:45.600611 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-09 03:38:45.600621 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-09 03:38:45.600632 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-09 03:38:45.600643 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-09 03:38:45.600653 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-09 03:38:45.600664 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-09 03:38:45.600674 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-09 03:38:45.600685 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-09 03:38:45.600695 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-09 03:38:45.600706 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-09 03:38:45.600716 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-09 03:38:45.600727 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-09 03:38:45.600737 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-09 03:38:45.600748 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-09 03:38:45.600759 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-09 03:38:45.600769 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-09 03:38:45.600780 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-09 03:38:45.600790 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-09 03:38:45.600801 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-09 03:38:45.600811 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-09 03:38:45.600822 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-09 03:38:45.600832 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-09 03:38:45.600843 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-09 03:38:45.600853 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-09 03:38:45.600864 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-09 03:38:45.600875 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-09 03:38:45.600885 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-09 03:38:45.600896 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-09 03:38:45.600942 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-09 03:38:45.600953 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-09 03:38:45.600964 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-09 03:38:45.600974 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-09 03:38:45.600985 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-09 03:38:45.600996 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-09 03:38:45.601013 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-09 03:38:45.601024 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-09 03:38:45.601035 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-09 03:38:45.601046 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-09 03:38:45.601056 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-09 03:38:45.601067 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-09 03:38:45.601077 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-09 03:38:45.601088 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-02-09 03:38:45.601106 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-09 03:39:00.581726 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-09 03:39:00.581835 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-09 03:39:00.581850 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-09 03:39:00.581862 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-02-09 03:39:00.581874 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-09 03:39:00.581884 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-09 03:39:00.581961 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-02-09 03:39:00.581975 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-09 03:39:00.581985 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-09 03:39:00.581994 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-02-09 03:39:00.582005 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-09 03:39:00.582014 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-02-09 03:39:00.582125 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-09 03:39:00.582136 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-02-09 03:39:00.582147 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-02-09 03:39:00.582157 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-02-09 03:39:00.582167 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-02-09 03:39:00.582176 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-09 03:39:00.582186 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-09 03:39:00.582196 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-09 03:39:00.582206 | orchestrator |
2026-02-09 03:39:00.582217 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-09 03:39:00.582227 | orchestrator | Monday 09 February 2026 03:38:45 +0000 (0:00:06.448) 0:02:40.370 *******
2026-02-09 03:39:00.582237 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:39:00.582248 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:39:00.582258 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:39:00.582271 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:39:00.582311 | orchestrator |
2026-02-09 03:39:00.582324 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-09 03:39:00.582336 | orchestrator | Monday 09 February 2026 03:38:46 +0000 (0:00:01.115) 0:02:41.486 *******
2026-02-09 03:39:00.582348 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-09 03:39:00.582361 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-09 03:39:00.582373 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-09 03:39:00.582384 | orchestrator |
2026-02-09 03:39:00.582396 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-09 03:39:00.582407 | orchestrator | Monday 09 February 2026 03:38:47 +0000 (0:00:00.723) 0:02:42.209 *******
2026-02-09 03:39:00.582419 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-09 03:39:00.582431 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-09 03:39:00.582443 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-09 03:39:00.582455 | orchestrator |
2026-02-09 03:39:00.582467 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-09 03:39:00.582476 | orchestrator | Monday 09 February 2026 03:38:48 +0000 (0:00:01.198) 0:02:43.408 *******
2026-02-09 03:39:00.582486 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:39:00.582496 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:39:00.582506 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:39:00.582515 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:39:00.582525 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:39:00.582535 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:39:00.582544 | orchestrator |
2026-02-09 03:39:00.582554 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-09 03:39:00.582576 | orchestrator | Monday 09 February 2026 03:38:49 +0000 (0:00:00.929) 0:02:44.337 *******
2026-02-09 03:39:00.582587 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:39:00.582596 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:39:00.582606 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:39:00.582615 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:39:00.582625 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:39:00.582635 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:39:00.582644 | orchestrator |
2026-02-09 03:39:00.582654 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-09 03:39:00.582664 | orchestrator | Monday 09 February 2026 03:38:50 +0000 (0:00:00.657) 0:02:44.995 *******
2026-02-09 03:39:00.582673 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:39:00.582683 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:39:00.582693 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:39:00.582702 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:39:00.582712 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:39:00.582740 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:39:00.582750 | orchestrator | 2026-02-09 03:39:00.582760 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-09 03:39:00.582770 | orchestrator | Monday 09 February 2026 03:38:51 +0000 (0:00:00.888) 0:02:45.884 ******* 2026-02-09 03:39:00.582780 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:00.582789 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:00.582799 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:00.582808 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:00.582818 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:00.582828 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:00.582846 | orchestrator | 2026-02-09 03:39:00.582855 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-09 03:39:00.582865 | orchestrator | Monday 09 February 2026 03:38:51 +0000 (0:00:00.640) 0:02:46.524 ******* 2026-02-09 03:39:00.582875 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:00.582888 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:00.582936 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:00.582953 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:00.582969 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:00.582985 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:00.582996 | orchestrator | 2026-02-09 03:39:00.583006 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-09 03:39:00.583016 | orchestrator | Monday 09 February 2026 03:38:52 +0000 (0:00:00.938) 0:02:47.462 ******* 2026-02-09 03:39:00.583026 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:00.583035 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:00.583045 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:00.583054 | orchestrator | skipping: 
[testbed-node-0] 2026-02-09 03:39:00.583064 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:00.583073 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:00.583083 | orchestrator | 2026-02-09 03:39:00.583093 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-09 03:39:00.583102 | orchestrator | Monday 09 February 2026 03:38:53 +0000 (0:00:00.642) 0:02:48.105 ******* 2026-02-09 03:39:00.583112 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:00.583121 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:00.583131 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:00.583140 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:00.583150 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:00.583159 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:00.583169 | orchestrator | 2026-02-09 03:39:00.583178 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-09 03:39:00.583188 | orchestrator | Monday 09 February 2026 03:38:54 +0000 (0:00:00.914) 0:02:49.020 ******* 2026-02-09 03:39:00.583198 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:00.583207 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:00.583217 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:00.583226 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:00.583236 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:00.583245 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:00.583255 | orchestrator | 2026-02-09 03:39:00.583265 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-09 03:39:00.583274 | orchestrator | Monday 09 February 2026 03:38:54 +0000 (0:00:00.608) 0:02:49.628 ******* 2026-02-09 03:39:00.583284 | orchestrator | skipping: 
[testbed-node-0] 2026-02-09 03:39:00.583294 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:00.583303 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:00.583313 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:39:00.583322 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:39:00.583332 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:39:00.583342 | orchestrator | 2026-02-09 03:39:00.583351 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-09 03:39:00.583361 | orchestrator | Monday 09 February 2026 03:38:57 +0000 (0:00:02.784) 0:02:52.413 ******* 2026-02-09 03:39:00.583371 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:39:00.583380 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:39:00.583390 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:39:00.583399 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:00.583409 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:00.583419 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:00.583428 | orchestrator | 2026-02-09 03:39:00.583438 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-09 03:39:00.583456 | orchestrator | Monday 09 February 2026 03:38:58 +0000 (0:00:00.648) 0:02:53.061 ******* 2026-02-09 03:39:00.583466 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:39:00.583475 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:39:00.583485 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:39:00.583494 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:00.583504 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:00.583513 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:00.583523 | orchestrator | 2026-02-09 03:39:00.583533 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-09 03:39:00.583542 | orchestrator | Monday 09 February 2026 03:38:59 +0000 
(0:00:00.976) 0:02:54.038 ******* 2026-02-09 03:39:00.583552 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:00.583561 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:00.583577 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:00.583587 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:00.583597 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:00.583606 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:00.583616 | orchestrator | 2026-02-09 03:39:00.583626 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-09 03:39:00.583635 | orchestrator | Monday 09 February 2026 03:38:59 +0000 (0:00:00.734) 0:02:54.772 ******* 2026-02-09 03:39:00.583645 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-09 03:39:00.583655 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-09 03:39:00.583673 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-09 03:39:15.477641 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.477730 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.477740 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.477747 | orchestrator | 2026-02-09 03:39:15.477755 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-09 03:39:15.477763 | orchestrator | Monday 09 February 2026 03:39:00 +0000 (0:00:01.001) 0:02:55.773 ******* 2026-02-09 03:39:15.477772 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-02-09 03:39:15.477782 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-02-09 03:39:15.477789 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:15.477796 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-02-09 03:39:15.477802 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-02-09 03:39:15.477809 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:15.477817 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-02-09 03:39:15.477841 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
 2026-02-09 03:39:15.477848 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:15.477854 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.477860 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.477866 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.477873 | orchestrator | 2026-02-09 03:39:15.477879 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-09 03:39:15.477885 | orchestrator | Monday 09 February 2026 03:39:01 +0000 (0:00:00.759) 0:02:56.532 ******* 2026-02-09 03:39:15.477937 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:15.477942 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:15.477949 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:15.477955 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.477961 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.477967 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.477973 | orchestrator | 2026-02-09 03:39:15.477979 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-09 03:39:15.477985 | orchestrator | Monday 09 February 2026 03:39:02 +0000 (0:00:00.898) 0:02:57.431 ******* 2026-02-09 03:39:15.477991 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:15.477997 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:15.478003 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:15.478009 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.478070 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.478079 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.478086 | orchestrator | 2026-02-09 03:39:15.478093 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-09 03:39:15.478114 | orchestrator | Monday 09 February 
2026 03:39:03 +0000 (0:00:00.660) 0:02:58.092 ******* 2026-02-09 03:39:15.478121 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:15.478127 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:15.478134 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:15.478141 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.478148 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.478154 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.478161 | orchestrator | 2026-02-09 03:39:15.478169 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-09 03:39:15.478176 | orchestrator | Monday 09 February 2026 03:39:04 +0000 (0:00:00.951) 0:02:59.043 ******* 2026-02-09 03:39:15.478183 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:15.478190 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:15.478196 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:15.478203 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.478209 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.478216 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.478223 | orchestrator | 2026-02-09 03:39:15.478247 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-09 03:39:15.478255 | orchestrator | Monday 09 February 2026 03:39:05 +0000 (0:00:00.900) 0:02:59.944 ******* 2026-02-09 03:39:15.478263 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:15.478270 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:15.478277 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:15.478285 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.478292 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.478299 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.478316 | orchestrator | 2026-02-09 03:39:15.478323 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-09 03:39:15.478331 | orchestrator | Monday 09 February 2026 03:39:05 +0000 (0:00:00.657) 0:03:00.601 ******* 2026-02-09 03:39:15.478338 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:39:15.478346 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:39:15.478353 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:39:15.478361 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.478368 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.478375 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.478382 | orchestrator | 2026-02-09 03:39:15.478390 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-09 03:39:15.478397 | orchestrator | Monday 09 February 2026 03:39:06 +0000 (0:00:00.867) 0:03:01.469 ******* 2026-02-09 03:39:15.478405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 03:39:15.478413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 03:39:15.478420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 03:39:15.478428 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:15.478436 | orchestrator | 2026-02-09 03:39:15.478443 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-09 03:39:15.478451 | orchestrator | Monday 09 February 2026 03:39:07 +0000 (0:00:00.460) 0:03:01.929 ******* 2026-02-09 03:39:15.478458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 03:39:15.478466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 03:39:15.478473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 03:39:15.478480 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:15.478488 | orchestrator | 2026-02-09 03:39:15.478495 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-09 03:39:15.478503 | orchestrator | Monday 09 February 2026 03:39:07 +0000 (0:00:00.460) 0:03:02.390 ******* 2026-02-09 03:39:15.478510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 03:39:15.478518 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 03:39:15.478525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 03:39:15.478532 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:15.478540 | orchestrator | 2026-02-09 03:39:15.478547 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-09 03:39:15.478554 | orchestrator | Monday 09 February 2026 03:39:07 +0000 (0:00:00.418) 0:03:02.808 ******* 2026-02-09 03:39:15.478562 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:39:15.478569 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:39:15.478577 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:39:15.478584 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.478592 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.478600 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.478607 | orchestrator | 2026-02-09 03:39:15.478615 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-09 03:39:15.478622 | orchestrator | Monday 09 February 2026 03:39:08 +0000 (0:00:00.676) 0:03:03.485 ******* 2026-02-09 03:39:15.478630 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-09 03:39:15.478637 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-09 03:39:15.478645 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-09 03:39:15.478652 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-09 03:39:15.478660 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:15.478667 | orchestrator | skipping: [testbed-node-1] => 
(item=0)  2026-02-09 03:39:15.478675 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:15.478682 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-09 03:39:15.478689 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:15.478696 | orchestrator | 2026-02-09 03:39:15.478703 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-09 03:39:15.478715 | orchestrator | Monday 09 February 2026 03:39:10 +0000 (0:00:01.900) 0:03:05.385 ******* 2026-02-09 03:39:15.478721 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:39:15.478727 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:39:15.478735 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:39:15.478742 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:39:15.478749 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:39:15.478757 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:39:15.478764 | orchestrator | 2026-02-09 03:39:15.478772 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-09 03:39:15.478779 | orchestrator | Monday 09 February 2026 03:39:13 +0000 (0:00:02.734) 0:03:08.120 ******* 2026-02-09 03:39:15.478787 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:39:15.478800 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:39:15.478807 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:39:15.478814 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:39:15.478821 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:39:15.478828 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:39:15.478835 | orchestrator | 2026-02-09 03:39:15.478843 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-09 03:39:15.478850 | orchestrator | Monday 09 February 2026 03:39:14 +0000 (0:00:01.000) 0:03:09.121 ******* 2026-02-09 03:39:15.478857 | orchestrator | skipping: 
[testbed-node-3] 2026-02-09 03:39:15.478865 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:15.478872 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:15.478880 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:39:15.478887 | orchestrator | 2026-02-09 03:39:15.478936 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-09 03:39:15.478949 | orchestrator | Monday 09 February 2026 03:39:15 +0000 (0:00:01.180) 0:03:10.301 ******* 2026-02-09 03:39:33.161369 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:33.161450 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:33.161458 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:33.161465 | orchestrator | 2026-02-09 03:39:33.161472 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-09 03:39:33.161478 | orchestrator | Monday 09 February 2026 03:39:15 +0000 (0:00:00.357) 0:03:10.659 ******* 2026-02-09 03:39:33.161484 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:39:33.161491 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:39:33.161497 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:39:33.161502 | orchestrator | 2026-02-09 03:39:33.161508 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-09 03:39:33.161514 | orchestrator | Monday 09 February 2026 03:39:17 +0000 (0:00:01.577) 0:03:12.236 ******* 2026-02-09 03:39:33.161519 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-09 03:39:33.161525 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-09 03:39:33.161531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-09 03:39:33.161536 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:33.161542 | orchestrator | 2026-02-09 
03:39:33.161547 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-09 03:39:33.161553 | orchestrator | Monday 09 February 2026 03:39:18 +0000 (0:00:00.696) 0:03:12.932 ******* 2026-02-09 03:39:33.161558 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:33.161564 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:33.161570 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:33.161575 | orchestrator | 2026-02-09 03:39:33.161581 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-09 03:39:33.161586 | orchestrator | Monday 09 February 2026 03:39:18 +0000 (0:00:00.357) 0:03:13.290 ******* 2026-02-09 03:39:33.161592 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:33.161597 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:33.161602 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:33.161626 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:39:33.161632 | orchestrator | 2026-02-09 03:39:33.161638 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-09 03:39:33.161643 | orchestrator | Monday 09 February 2026 03:39:19 +0000 (0:00:01.156) 0:03:14.447 ******* 2026-02-09 03:39:33.161649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 03:39:33.161654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 03:39:33.161660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 03:39:33.161665 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.161670 | orchestrator | 2026-02-09 03:39:33.161676 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-09 03:39:33.161681 | orchestrator | Monday 09 February 2026 03:39:20 +0000 (0:00:00.467) 
0:03:14.914 ******* 2026-02-09 03:39:33.161687 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.161692 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:33.161697 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:33.161703 | orchestrator | 2026-02-09 03:39:33.161708 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-09 03:39:33.161714 | orchestrator | Monday 09 February 2026 03:39:20 +0000 (0:00:00.365) 0:03:15.279 ******* 2026-02-09 03:39:33.161719 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.161724 | orchestrator | 2026-02-09 03:39:33.161730 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-09 03:39:33.161735 | orchestrator | Monday 09 February 2026 03:39:20 +0000 (0:00:00.255) 0:03:15.535 ******* 2026-02-09 03:39:33.161740 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.161746 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:33.161751 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:33.161756 | orchestrator | 2026-02-09 03:39:33.161764 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-09 03:39:33.161773 | orchestrator | Monday 09 February 2026 03:39:21 +0000 (0:00:00.343) 0:03:15.878 ******* 2026-02-09 03:39:33.161781 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.161790 | orchestrator | 2026-02-09 03:39:33.161799 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-09 03:39:33.161807 | orchestrator | Monday 09 February 2026 03:39:21 +0000 (0:00:00.784) 0:03:16.663 ******* 2026-02-09 03:39:33.161815 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.161824 | orchestrator | 2026-02-09 03:39:33.161833 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-09 03:39:33.161842 
| orchestrator | Monday 09 February 2026 03:39:22 +0000 (0:00:00.310) 0:03:16.974 ******* 2026-02-09 03:39:33.161850 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.161859 | orchestrator | 2026-02-09 03:39:33.161867 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-09 03:39:33.161877 | orchestrator | Monday 09 February 2026 03:39:22 +0000 (0:00:00.159) 0:03:17.134 ******* 2026-02-09 03:39:33.161948 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.161957 | orchestrator | 2026-02-09 03:39:33.161963 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-09 03:39:33.161970 | orchestrator | Monday 09 February 2026 03:39:22 +0000 (0:00:00.243) 0:03:17.377 ******* 2026-02-09 03:39:33.161976 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.161983 | orchestrator | 2026-02-09 03:39:33.161990 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-02-09 03:39:33.161996 | orchestrator | Monday 09 February 2026 03:39:22 +0000 (0:00:00.256) 0:03:17.634 ******* 2026-02-09 03:39:33.162005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 03:39:33.162071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 03:39:33.162085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 03:39:33.162106 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.162115 | orchestrator | 2026-02-09 03:39:33.162124 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-02-09 03:39:33.162150 | orchestrator | Monday 09 February 2026 03:39:23 +0000 (0:00:00.417) 0:03:18.052 ******* 2026-02-09 03:39:33.162160 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.162169 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:33.162178 | orchestrator | 
skipping: [testbed-node-5] 2026-02-09 03:39:33.162187 | orchestrator | 2026-02-09 03:39:33.162197 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-02-09 03:39:33.162206 | orchestrator | Monday 09 February 2026 03:39:23 +0000 (0:00:00.357) 0:03:18.409 ******* 2026-02-09 03:39:33.162215 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.162224 | orchestrator | 2026-02-09 03:39:33.162233 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-02-09 03:39:33.162242 | orchestrator | Monday 09 February 2026 03:39:23 +0000 (0:00:00.259) 0:03:18.669 ******* 2026-02-09 03:39:33.162252 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.162262 | orchestrator | 2026-02-09 03:39:33.162269 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-09 03:39:33.162275 | orchestrator | Monday 09 February 2026 03:39:24 +0000 (0:00:00.250) 0:03:18.919 ******* 2026-02-09 03:39:33.162284 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:33.162293 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:33.162302 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:33.162311 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:39:33.162319 | orchestrator | 2026-02-09 03:39:33.162328 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-09 03:39:33.162334 | orchestrator | Monday 09 February 2026 03:39:25 +0000 (0:00:01.158) 0:03:20.078 ******* 2026-02-09 03:39:33.162339 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:39:33.162345 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:39:33.162350 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:39:33.162356 | orchestrator | 2026-02-09 03:39:33.162361 | orchestrator | RUNNING HANDLER [ceph-handler : 
Copy mds restart script] *********************** 2026-02-09 03:39:33.162367 | orchestrator | Monday 09 February 2026 03:39:25 +0000 (0:00:00.325) 0:03:20.404 ******* 2026-02-09 03:39:33.162372 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:39:33.162377 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:39:33.162383 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:39:33.162388 | orchestrator | 2026-02-09 03:39:33.162393 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-09 03:39:33.162399 | orchestrator | Monday 09 February 2026 03:39:27 +0000 (0:00:01.523) 0:03:21.927 ******* 2026-02-09 03:39:33.162407 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 03:39:33.162416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 03:39:33.162425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 03:39:33.162434 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.162443 | orchestrator | 2026-02-09 03:39:33.162451 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-09 03:39:33.162460 | orchestrator | Monday 09 February 2026 03:39:27 +0000 (0:00:00.662) 0:03:22.590 ******* 2026-02-09 03:39:33.162470 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:39:33.162478 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:39:33.162487 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:39:33.162495 | orchestrator | 2026-02-09 03:39:33.162500 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-09 03:39:33.162506 | orchestrator | Monday 09 February 2026 03:39:28 +0000 (0:00:00.399) 0:03:22.989 ******* 2026-02-09 03:39:33.162511 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:33.162516 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:33.162522 | orchestrator | skipping: 
[testbed-node-2] 2026-02-09 03:39:33.162533 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:39:33.162538 | orchestrator | 2026-02-09 03:39:33.162544 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-09 03:39:33.162549 | orchestrator | Monday 09 February 2026 03:39:29 +0000 (0:00:01.156) 0:03:24.146 ******* 2026-02-09 03:39:33.162554 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:39:33.162563 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:39:33.162571 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:39:33.162581 | orchestrator | 2026-02-09 03:39:33.162590 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-09 03:39:33.162599 | orchestrator | Monday 09 February 2026 03:39:29 +0000 (0:00:00.440) 0:03:24.586 ******* 2026-02-09 03:39:33.162608 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:39:33.162618 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:39:33.162624 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:39:33.162629 | orchestrator | 2026-02-09 03:39:33.162635 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-09 03:39:33.162640 | orchestrator | Monday 09 February 2026 03:39:30 +0000 (0:00:01.190) 0:03:25.777 ******* 2026-02-09 03:39:33.162645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 03:39:33.162651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 03:39:33.162661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 03:39:33.162666 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.162672 | orchestrator | 2026-02-09 03:39:33.162677 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-09 03:39:33.162683 | 
orchestrator | Monday 09 February 2026 03:39:31 +0000 (0:00:00.964) 0:03:26.741 ******* 2026-02-09 03:39:33.162688 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:39:33.162693 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:39:33.162699 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:39:33.162704 | orchestrator | 2026-02-09 03:39:33.162710 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-09 03:39:33.162715 | orchestrator | Monday 09 February 2026 03:39:32 +0000 (0:00:00.579) 0:03:27.321 ******* 2026-02-09 03:39:33.162720 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:33.162725 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:33.162731 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:33.162736 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:33.162742 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:33.162752 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.724838 | orchestrator | 2026-02-09 03:39:50.725037 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-09 03:39:50.725071 | orchestrator | Monday 09 February 2026 03:39:33 +0000 (0:00:00.673) 0:03:27.995 ******* 2026-02-09 03:39:50.725092 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:39:50.725111 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:39:50.725129 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:39:50.725149 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:39:50.725169 | orchestrator | 2026-02-09 03:39:50.725189 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-09 03:39:50.725208 | orchestrator | Monday 09 February 2026 03:39:34 +0000 (0:00:01.153) 0:03:29.149 ******* 2026-02-09 03:39:50.725227 | orchestrator | ok: 
[testbed-node-0] 2026-02-09 03:39:50.725248 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.725267 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.725288 | orchestrator | 2026-02-09 03:39:50.725308 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-09 03:39:50.725327 | orchestrator | Monday 09 February 2026 03:39:34 +0000 (0:00:00.360) 0:03:29.509 ******* 2026-02-09 03:39:50.725343 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:39:50.725381 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:39:50.725392 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:39:50.725403 | orchestrator | 2026-02-09 03:39:50.725414 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-09 03:39:50.725425 | orchestrator | Monday 09 February 2026 03:39:35 +0000 (0:00:01.248) 0:03:30.758 ******* 2026-02-09 03:39:50.725437 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-09 03:39:50.725449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-09 03:39:50.725460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-09 03:39:50.725470 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.725481 | orchestrator | 2026-02-09 03:39:50.725492 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-09 03:39:50.725503 | orchestrator | Monday 09 February 2026 03:39:37 +0000 (0:00:01.288) 0:03:32.047 ******* 2026-02-09 03:39:50.725514 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.725524 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.725535 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.725546 | orchestrator | 2026-02-09 03:39:50.725557 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-09 03:39:50.725567 | orchestrator | 2026-02-09 
03:39:50.725578 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-09 03:39:50.725589 | orchestrator | Monday 09 February 2026 03:39:37 +0000 (0:00:00.623) 0:03:32.670 ******* 2026-02-09 03:39:50.725601 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:39:50.725613 | orchestrator | 2026-02-09 03:39:50.725623 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-09 03:39:50.725634 | orchestrator | Monday 09 February 2026 03:39:38 +0000 (0:00:00.766) 0:03:33.436 ******* 2026-02-09 03:39:50.725645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:39:50.725656 | orchestrator | 2026-02-09 03:39:50.725666 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-09 03:39:50.725677 | orchestrator | Monday 09 February 2026 03:39:39 +0000 (0:00:00.579) 0:03:34.015 ******* 2026-02-09 03:39:50.725688 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.725698 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.725709 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.725720 | orchestrator | 2026-02-09 03:39:50.725730 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-09 03:39:50.725741 | orchestrator | Monday 09 February 2026 03:39:39 +0000 (0:00:00.816) 0:03:34.832 ******* 2026-02-09 03:39:50.725752 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.725763 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.725774 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.725784 | orchestrator | 2026-02-09 03:39:50.725795 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-02-09 03:39:50.725805 | orchestrator | Monday 09 February 2026 03:39:40 +0000 (0:00:00.637) 0:03:35.469 ******* 2026-02-09 03:39:50.725816 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.725827 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.725838 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.725848 | orchestrator | 2026-02-09 03:39:50.725859 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-09 03:39:50.725870 | orchestrator | Monday 09 February 2026 03:39:40 +0000 (0:00:00.350) 0:03:35.820 ******* 2026-02-09 03:39:50.725908 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.725920 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.725946 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.725957 | orchestrator | 2026-02-09 03:39:50.725968 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-09 03:39:50.725978 | orchestrator | Monday 09 February 2026 03:39:41 +0000 (0:00:00.367) 0:03:36.187 ******* 2026-02-09 03:39:50.725999 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.726010 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.726083 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.726095 | orchestrator | 2026-02-09 03:39:50.726106 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-09 03:39:50.726117 | orchestrator | Monday 09 February 2026 03:39:42 +0000 (0:00:00.754) 0:03:36.942 ******* 2026-02-09 03:39:50.726128 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.726138 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.726149 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.726160 | orchestrator | 2026-02-09 03:39:50.726170 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-09 
03:39:50.726181 | orchestrator | Monday 09 February 2026 03:39:42 +0000 (0:00:00.622) 0:03:37.564 ******* 2026-02-09 03:39:50.726192 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.726224 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.726236 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.726247 | orchestrator | 2026-02-09 03:39:50.726257 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-09 03:39:50.726270 | orchestrator | Monday 09 February 2026 03:39:43 +0000 (0:00:00.387) 0:03:37.952 ******* 2026-02-09 03:39:50.726289 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.726309 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.726328 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.726346 | orchestrator | 2026-02-09 03:39:50.726366 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-09 03:39:50.726386 | orchestrator | Monday 09 February 2026 03:39:43 +0000 (0:00:00.792) 0:03:38.745 ******* 2026-02-09 03:39:50.726406 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.726426 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.726445 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.726464 | orchestrator | 2026-02-09 03:39:50.726484 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-09 03:39:50.726504 | orchestrator | Monday 09 February 2026 03:39:44 +0000 (0:00:00.763) 0:03:39.508 ******* 2026-02-09 03:39:50.726524 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.726544 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.726557 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.726568 | orchestrator | 2026-02-09 03:39:50.726578 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-09 03:39:50.726589 | orchestrator | Monday 
09 February 2026 03:39:45 +0000 (0:00:00.609) 0:03:40.118 ******* 2026-02-09 03:39:50.726600 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.726611 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.726621 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.726632 | orchestrator | 2026-02-09 03:39:50.726643 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-09 03:39:50.726654 | orchestrator | Monday 09 February 2026 03:39:45 +0000 (0:00:00.368) 0:03:40.487 ******* 2026-02-09 03:39:50.726664 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.726675 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.726686 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.726696 | orchestrator | 2026-02-09 03:39:50.726707 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-09 03:39:50.726718 | orchestrator | Monday 09 February 2026 03:39:45 +0000 (0:00:00.338) 0:03:40.825 ******* 2026-02-09 03:39:50.726729 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.726740 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.726750 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.726761 | orchestrator | 2026-02-09 03:39:50.726772 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-09 03:39:50.726782 | orchestrator | Monday 09 February 2026 03:39:46 +0000 (0:00:00.367) 0:03:41.193 ******* 2026-02-09 03:39:50.726793 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.726815 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.726826 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.726836 | orchestrator | 2026-02-09 03:39:50.726847 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-09 03:39:50.726858 | orchestrator | Monday 09 February 2026 
03:39:46 +0000 (0:00:00.619) 0:03:41.813 ******* 2026-02-09 03:39:50.726869 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.726901 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.726913 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.726923 | orchestrator | 2026-02-09 03:39:50.726934 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-09 03:39:50.726952 | orchestrator | Monday 09 February 2026 03:39:47 +0000 (0:00:00.339) 0:03:42.152 ******* 2026-02-09 03:39:50.726970 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.727001 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:39:50.727021 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:39:50.727040 | orchestrator | 2026-02-09 03:39:50.727058 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-09 03:39:50.727074 | orchestrator | Monday 09 February 2026 03:39:47 +0000 (0:00:00.331) 0:03:42.484 ******* 2026-02-09 03:39:50.727092 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.727110 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.727129 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.727146 | orchestrator | 2026-02-09 03:39:50.727165 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-09 03:39:50.727183 | orchestrator | Monday 09 February 2026 03:39:48 +0000 (0:00:00.366) 0:03:42.850 ******* 2026-02-09 03:39:50.727201 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.727219 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.727237 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.727255 | orchestrator | 2026-02-09 03:39:50.727273 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-09 03:39:50.727292 | orchestrator | Monday 09 February 2026 03:39:48 +0000 (0:00:00.623) 
0:03:43.474 ******* 2026-02-09 03:39:50.727310 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.727328 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.727346 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.727364 | orchestrator | 2026-02-09 03:39:50.727393 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-09 03:39:50.727412 | orchestrator | Monday 09 February 2026 03:39:49 +0000 (0:00:00.609) 0:03:44.083 ******* 2026-02-09 03:39:50.727431 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:39:50.727448 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:39:50.727467 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:39:50.727485 | orchestrator | 2026-02-09 03:39:50.727503 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-09 03:39:50.727522 | orchestrator | Monday 09 February 2026 03:39:49 +0000 (0:00:00.387) 0:03:44.471 ******* 2026-02-09 03:39:50.727540 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:39:50.727559 | orchestrator | 2026-02-09 03:39:50.727576 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-09 03:39:50.727596 | orchestrator | Monday 09 February 2026 03:39:50 +0000 (0:00:00.907) 0:03:45.379 ******* 2026-02-09 03:39:50.727614 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:39:50.727632 | orchestrator | 2026-02-09 03:39:50.727668 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-09 03:40:41.282462 | orchestrator | Monday 09 February 2026 03:39:50 +0000 (0:00:00.169) 0:03:45.549 ******* 2026-02-09 03:40:41.282563 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-09 03:40:41.282580 | orchestrator | 2026-02-09 03:40:41.282592 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-02-09 03:40:41.282603 | orchestrator | Monday 09 February 2026 03:39:51 +0000 (0:00:01.096) 0:03:46.645 ******* 2026-02-09 03:40:41.282636 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:40:41.282648 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:40:41.282658 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:40:41.282669 | orchestrator | 2026-02-09 03:40:41.282679 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-09 03:40:41.282689 | orchestrator | Monday 09 February 2026 03:39:52 +0000 (0:00:00.394) 0:03:47.040 ******* 2026-02-09 03:40:41.282700 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:40:41.282710 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:40:41.282720 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:40:41.282730 | orchestrator | 2026-02-09 03:40:41.282741 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-09 03:40:41.282751 | orchestrator | Monday 09 February 2026 03:39:52 +0000 (0:00:00.683) 0:03:47.723 ******* 2026-02-09 03:40:41.282762 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.282773 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.282783 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.282793 | orchestrator | 2026-02-09 03:40:41.282803 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-09 03:40:41.282814 | orchestrator | Monday 09 February 2026 03:39:54 +0000 (0:00:01.191) 0:03:48.915 ******* 2026-02-09 03:40:41.282824 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.282834 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.282845 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.282855 | orchestrator | 2026-02-09 03:40:41.282911 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-02-09 03:40:41.282922 | orchestrator | Monday 09 February 2026 03:39:54 +0000 (0:00:00.818) 0:03:49.733 ******* 2026-02-09 03:40:41.282932 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.282941 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.282951 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.282960 | orchestrator | 2026-02-09 03:40:41.282970 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-09 03:40:41.282979 | orchestrator | Monday 09 February 2026 03:39:55 +0000 (0:00:00.663) 0:03:50.397 ******* 2026-02-09 03:40:41.282991 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:40:41.283002 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:40:41.283013 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:40:41.283024 | orchestrator | 2026-02-09 03:40:41.283035 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-09 03:40:41.283046 | orchestrator | Monday 09 February 2026 03:39:56 +0000 (0:00:01.007) 0:03:51.404 ******* 2026-02-09 03:40:41.283057 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.283069 | orchestrator | 2026-02-09 03:40:41.283079 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-09 03:40:41.283092 | orchestrator | Monday 09 February 2026 03:39:57 +0000 (0:00:01.262) 0:03:52.667 ******* 2026-02-09 03:40:41.283104 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:40:41.283115 | orchestrator | 2026-02-09 03:40:41.283125 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-09 03:40:41.283136 | orchestrator | Monday 09 February 2026 03:39:58 +0000 (0:00:00.744) 0:03:53.411 ******* 2026-02-09 03:40:41.283148 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-09 03:40:41.283159 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:40:41.283171 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:40:41.283181 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-09 03:40:41.283193 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-09 03:40:41.283205 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-09 03:40:41.283216 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-09 03:40:41.283227 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-09 03:40:41.283236 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-09 03:40:41.283257 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-09 03:40:41.283266 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-09 03:40:41.283276 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-09 03:40:41.283285 | orchestrator | 2026-02-09 03:40:41.283295 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-09 03:40:41.283304 | orchestrator | Monday 09 February 2026 03:40:01 +0000 (0:00:03.027) 0:03:56.439 ******* 2026-02-09 03:40:41.283314 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.283323 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.283347 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.283357 | orchestrator | 2026-02-09 03:40:41.283366 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-09 03:40:41.283376 | orchestrator | Monday 09 February 2026 03:40:02 +0000 (0:00:01.237) 0:03:57.676 ******* 2026-02-09 03:40:41.283386 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:40:41.283395 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:40:41.283405 | orchestrator | ok: [testbed-node-2] 
2026-02-09 03:40:41.283414 | orchestrator | 2026-02-09 03:40:41.283424 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-09 03:40:41.283433 | orchestrator | Monday 09 February 2026 03:40:03 +0000 (0:00:00.656) 0:03:58.332 ******* 2026-02-09 03:40:41.283442 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:40:41.283452 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:40:41.283461 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:40:41.283470 | orchestrator | 2026-02-09 03:40:41.283480 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-09 03:40:41.283489 | orchestrator | Monday 09 February 2026 03:40:03 +0000 (0:00:00.351) 0:03:58.684 ******* 2026-02-09 03:40:41.283499 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.283508 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.283533 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.283543 | orchestrator | 2026-02-09 03:40:41.283552 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-09 03:40:41.283562 | orchestrator | Monday 09 February 2026 03:40:05 +0000 (0:00:01.455) 0:04:00.139 ******* 2026-02-09 03:40:41.283571 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.283581 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.283590 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.283600 | orchestrator | 2026-02-09 03:40:41.283609 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-09 03:40:41.283619 | orchestrator | Monday 09 February 2026 03:40:06 +0000 (0:00:01.304) 0:04:01.444 ******* 2026-02-09 03:40:41.283628 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:40:41.283637 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:40:41.283647 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:40:41.283656 
| orchestrator | 2026-02-09 03:40:41.283666 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-09 03:40:41.283675 | orchestrator | Monday 09 February 2026 03:40:07 +0000 (0:00:00.647) 0:04:02.092 ******* 2026-02-09 03:40:41.283685 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:40:41.283695 | orchestrator | 2026-02-09 03:40:41.283705 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-09 03:40:41.283714 | orchestrator | Monday 09 February 2026 03:40:07 +0000 (0:00:00.611) 0:04:02.703 ******* 2026-02-09 03:40:41.283724 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:40:41.283733 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:40:41.283743 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:40:41.283752 | orchestrator | 2026-02-09 03:40:41.283761 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-09 03:40:41.283771 | orchestrator | Monday 09 February 2026 03:40:08 +0000 (0:00:00.320) 0:04:03.024 ******* 2026-02-09 03:40:41.283780 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:40:41.283800 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:40:41.283809 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:40:41.283819 | orchestrator | 2026-02-09 03:40:41.283828 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-09 03:40:41.283838 | orchestrator | Monday 09 February 2026 03:40:08 +0000 (0:00:00.596) 0:04:03.620 ******* 2026-02-09 03:40:41.283847 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:40:41.283858 | orchestrator | 2026-02-09 03:40:41.283894 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2026-02-09 03:40:41.283904 | orchestrator | Monday 09 February 2026 03:40:09 +0000 (0:00:00.590) 0:04:04.210 ******* 2026-02-09 03:40:41.283914 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.283923 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.283933 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.283942 | orchestrator | 2026-02-09 03:40:41.283952 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-09 03:40:41.283961 | orchestrator | Monday 09 February 2026 03:40:11 +0000 (0:00:01.846) 0:04:06.057 ******* 2026-02-09 03:40:41.283970 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.283980 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.283989 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.283999 | orchestrator | 2026-02-09 03:40:41.284008 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-09 03:40:41.284018 | orchestrator | Monday 09 February 2026 03:40:12 +0000 (0:00:01.463) 0:04:07.520 ******* 2026-02-09 03:40:41.284027 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.284036 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.284046 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.284055 | orchestrator | 2026-02-09 03:40:41.284064 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-09 03:40:41.284074 | orchestrator | Monday 09 February 2026 03:40:14 +0000 (0:00:01.742) 0:04:09.263 ******* 2026-02-09 03:40:41.284083 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:40:41.284093 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:40:41.284102 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:40:41.284111 | orchestrator | 2026-02-09 03:40:41.284121 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2026-02-09 03:40:41.284130 | orchestrator | Monday 09 February 2026 03:40:16 +0000 (0:00:01.927) 0:04:11.191 ******* 2026-02-09 03:40:41.284140 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:40:41.284149 | orchestrator | 2026-02-09 03:40:41.284159 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-09 03:40:41.284168 | orchestrator | Monday 09 February 2026 03:40:17 +0000 (0:00:00.916) 0:04:12.107 ******* 2026-02-09 03:40:41.284177 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:40:41.284187 | orchestrator | 2026-02-09 03:40:41.284202 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-09 03:40:41.284211 | orchestrator | Monday 09 February 2026 03:40:18 +0000 (0:00:01.197) 0:04:13.305 ******* 2026-02-09 03:40:41.284221 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:40:41.284231 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:40:41.284240 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:40:41.284249 | orchestrator | 2026-02-09 03:40:41.284259 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-09 03:40:41.284271 | orchestrator | Monday 09 February 2026 03:40:27 +0000 (0:00:08.691) 0:04:21.997 ******* 2026-02-09 03:40:41.284286 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:40:41.284302 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:40:41.284318 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:40:41.284334 | orchestrator | 2026-02-09 03:40:41.284348 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-09 03:40:41.284358 | orchestrator | Monday 09 February 2026 03:40:27 +0000 (0:00:00.363) 0:04:22.360 ******* 2026-02-09 03:40:41.284387 | orchestrator | changed: [testbed-node-0] => 
(item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62df7b744f88cb9fb3200fa222360c713ffd8598'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-09 03:40:53.826948 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62df7b744f88cb9fb3200fa222360c713ffd8598'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-09 03:40:53.827090 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62df7b744f88cb9fb3200fa222360c713ffd8598'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-09 03:40:53.827118 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62df7b744f88cb9fb3200fa222360c713ffd8598'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-09 03:40:53.827138 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62df7b744f88cb9fb3200fa222360c713ffd8598'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 
2026-02-09 03:40:53.827157 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62df7b744f88cb9fb3200fa222360c713ffd8598'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__62df7b744f88cb9fb3200fa222360c713ffd8598'}])
2026-02-09 03:40:53.827179 | orchestrator |
2026-02-09 03:40:53.827200 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-09 03:40:53.827221 | orchestrator | Monday 09 February 2026 03:40:41 +0000 (0:00:13.751) 0:04:36.111 *******
2026-02-09 03:40:53.827240 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.827260 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.827278 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.827297 | orchestrator |
2026-02-09 03:40:53.827316 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-09 03:40:53.827336 | orchestrator | Monday 09 February 2026 03:40:41 +0000 (0:00:00.386) 0:04:36.498 *******
2026-02-09 03:40:53.827355 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:40:53.827375 | orchestrator |
2026-02-09 03:40:53.827396 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-09 03:40:53.827417 | orchestrator | Monday 09 February 2026 03:40:42 +0000 (0:00:00.821) 0:04:37.319 *******
2026-02-09 03:40:53.827438 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:40:53.827459 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:40:53.827480 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:40:53.827502 | orchestrator |
2026-02-09 03:40:53.827524 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-09 03:40:53.827543 | orchestrator | Monday 09 February 2026 03:40:42 +0000 (0:00:00.369) 0:04:37.689 *******
2026-02-09 03:40:53.827638 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.827662 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.827681 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.827700 | orchestrator |
2026-02-09 03:40:53.827737 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-09 03:40:53.827757 | orchestrator | Monday 09 February 2026 03:40:43 +0000 (0:00:00.386) 0:04:38.076 *******
2026-02-09 03:40:53.827776 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 03:40:53.827795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 03:40:53.827813 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 03:40:53.827831 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.827849 | orchestrator |
2026-02-09 03:40:53.827896 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-09 03:40:53.827916 | orchestrator | Monday 09 February 2026 03:40:44 +0000 (0:00:00.956) 0:04:39.032 *******
2026-02-09 03:40:53.827934 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:40:53.827953 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:40:53.827973 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:40:53.827991 | orchestrator |
2026-02-09 03:40:53.828011 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-09 03:40:53.828030 | orchestrator |
2026-02-09 03:40:53.828050 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-09 03:40:53.828070 | orchestrator | Monday 09 February 2026 03:40:45 +0000 (0:00:00.903) 0:04:39.936 *******
2026-02-09 03:40:53.828090 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:40:53.828110 | orchestrator |
2026-02-09 03:40:53.828157 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-09 03:40:53.828178 | orchestrator | Monday 09 February 2026 03:40:45 +0000 (0:00:00.590) 0:04:40.527 *******
2026-02-09 03:40:53.828197 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:40:53.828215 | orchestrator |
2026-02-09 03:40:53.828234 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-09 03:40:53.828253 | orchestrator | Monday 09 February 2026 03:40:46 +0000 (0:00:00.808) 0:04:41.335 *******
2026-02-09 03:40:53.828272 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:40:53.828291 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:40:53.828310 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:40:53.828330 | orchestrator |
2026-02-09 03:40:53.828349 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-09 03:40:53.828370 | orchestrator | Monday 09 February 2026 03:40:47 +0000 (0:00:00.743) 0:04:42.079 *******
2026-02-09 03:40:53.828389 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.828409 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.828428 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.828447 | orchestrator |
2026-02-09 03:40:53.828466 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-09 03:40:53.828485 | orchestrator | Monday 09 February 2026 03:40:47 +0000 (0:00:00.346) 0:04:42.425 *******
2026-02-09 03:40:53.828506 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.828525 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.828544 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.828563 | orchestrator |
2026-02-09 03:40:53.828583 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-09 03:40:53.828603 | orchestrator | Monday 09 February 2026 03:40:48 +0000 (0:00:00.596) 0:04:43.022 *******
2026-02-09 03:40:53.828622 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.828642 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.828660 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.828679 | orchestrator |
2026-02-09 03:40:53.828700 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-09 03:40:53.828741 | orchestrator | Monday 09 February 2026 03:40:48 +0000 (0:00:00.369) 0:04:43.391 *******
2026-02-09 03:40:53.828761 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:40:53.828780 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:40:53.828799 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:40:53.828819 | orchestrator |
2026-02-09 03:40:53.828839 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-09 03:40:53.828919 | orchestrator | Monday 09 February 2026 03:40:49 +0000 (0:00:00.749) 0:04:44.141 *******
2026-02-09 03:40:53.828942 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.828959 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.828977 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.828997 | orchestrator |
2026-02-09 03:40:53.829016 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-09 03:40:53.829034 | orchestrator | Monday 09 February 2026 03:40:49 +0000 (0:00:00.326) 0:04:44.468 *******
2026-02-09 03:40:53.829053 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.829071 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.829090 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.829108 | orchestrator |
2026-02-09 03:40:53.829125 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-09 03:40:53.829144 | orchestrator | Monday 09 February 2026 03:40:50 +0000 (0:00:00.597) 0:04:45.066 *******
2026-02-09 03:40:53.829163 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:40:53.829181 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:40:53.829200 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:40:53.829219 | orchestrator |
2026-02-09 03:40:53.829238 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-09 03:40:53.829252 | orchestrator | Monday 09 February 2026 03:40:50 +0000 (0:00:00.745) 0:04:45.812 *******
2026-02-09 03:40:53.829263 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:40:53.829274 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:40:53.829285 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:40:53.829295 | orchestrator |
2026-02-09 03:40:53.829307 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-09 03:40:53.829318 | orchestrator | Monday 09 February 2026 03:40:51 +0000 (0:00:00.734) 0:04:46.546 *******
2026-02-09 03:40:53.829329 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.829340 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.829351 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.829362 | orchestrator |
2026-02-09 03:40:53.829384 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-09 03:40:53.829395 | orchestrator | Monday 09 February 2026 03:40:52 +0000 (0:00:00.393) 0:04:46.940 *******
2026-02-09 03:40:53.829406 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:40:53.829416 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:40:53.829427 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:40:53.829438 | orchestrator |
2026-02-09 03:40:53.829449 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-09 03:40:53.829459 | orchestrator | Monday 09 February 2026 03:40:52 +0000 (0:00:00.683) 0:04:47.624 *******
2026-02-09 03:40:53.829470 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.829481 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.829492 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.829503 | orchestrator |
2026-02-09 03:40:53.829514 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-09 03:40:53.829524 | orchestrator | Monday 09 February 2026 03:40:53 +0000 (0:00:00.366) 0:04:47.991 *******
2026-02-09 03:40:53.829535 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.829546 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.829557 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.829568 | orchestrator |
2026-02-09 03:40:53.829578 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-09 03:40:53.829589 | orchestrator | Monday 09 February 2026 03:40:53 +0000 (0:00:00.319) 0:04:48.310 *******
2026-02-09 03:40:53.829613 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:40:53.829624 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:40:53.829635 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:40:53.829646 | orchestrator |
2026-02-09 03:40:53.829673 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-09 03:41:48.469696 | orchestrator | Monday 09 February 2026 03:40:53 +0000 (0:00:00.347) 0:04:48.657 *******
2026-02-09 03:41:48.469790 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:41:48.469801 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:41:48.469808 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:41:48.469815 | orchestrator |
2026-02-09 03:41:48.469829 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-09 03:41:48.469836 | orchestrator | Monday 09 February 2026 03:40:54 +0000 (0:00:00.619) 0:04:49.277 *******
2026-02-09 03:41:48.469879 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:41:48.469886 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:41:48.469892 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:41:48.469899 | orchestrator |
2026-02-09 03:41:48.469905 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-09 03:41:48.469912 | orchestrator | Monday 09 February 2026 03:40:54 +0000 (0:00:00.334) 0:04:49.611 *******
2026-02-09 03:41:48.469919 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:41:48.469926 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:41:48.469932 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:41:48.469939 | orchestrator |
2026-02-09 03:41:48.469945 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-09 03:41:48.469952 | orchestrator | Monday 09 February 2026 03:40:55 +0000 (0:00:00.375) 0:04:49.987 *******
2026-02-09 03:41:48.469958 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:41:48.469965 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:41:48.469971 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:41:48.469978 | orchestrator |
2026-02-09 03:41:48.469984 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-09 03:41:48.469990 | orchestrator | Monday 09 February 2026 03:40:55 +0000 (0:00:00.368) 0:04:50.355 *******
2026-02-09 03:41:48.469997 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:41:48.470003 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:41:48.470009 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:41:48.470068 | orchestrator |
2026-02-09 03:41:48.470084 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-09 03:41:48.470096 | orchestrator | Monday 09 February 2026 03:40:56 +0000 (0:00:00.897) 0:04:51.252 *******
2026-02-09 03:41:48.470106 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 03:41:48.470117 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 03:41:48.470129 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 03:41:48.470139 | orchestrator |
2026-02-09 03:41:48.470149 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-09 03:41:48.470160 | orchestrator | Monday 09 February 2026 03:40:57 +0000 (0:00:00.671) 0:04:51.923 *******
2026-02-09 03:41:48.470169 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:41:48.470176 | orchestrator |
2026-02-09 03:41:48.470182 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-09 03:41:48.470189 | orchestrator | Monday 09 February 2026 03:40:57 +0000 (0:00:00.794) 0:04:52.717 *******
2026-02-09 03:41:48.470195 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:41:48.470201 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:41:48.470208 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:41:48.470214 | orchestrator |
2026-02-09 03:41:48.470220 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-09 03:41:48.470226 | orchestrator | Monday 09 February 2026 03:40:58 +0000 (0:00:00.785) 0:04:53.502 *******
2026-02-09 03:41:48.470253 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:41:48.470261 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:41:48.470269 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:41:48.470276 | orchestrator |
2026-02-09 03:41:48.470284 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-09 03:41:48.470292 | orchestrator | Monday 09 February 2026 03:40:59 +0000 (0:00:00.365) 0:04:53.868 *******
2026-02-09 03:41:48.470300 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-09 03:41:48.470308 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-09 03:41:48.470315 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-09 03:41:48.470323 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-09 03:41:48.470330 | orchestrator |
2026-02-09 03:41:48.470337 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-09 03:41:48.470356 | orchestrator | Monday 09 February 2026 03:41:09 +0000 (0:00:10.003) 0:05:03.871 *******
2026-02-09 03:41:48.470364 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:41:48.470372 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:41:48.470379 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:41:48.470387 | orchestrator |
2026-02-09 03:41:48.470394 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-09 03:41:48.470402 | orchestrator | Monday 09 February 2026 03:41:09 +0000 (0:00:00.410) 0:05:04.281 *******
2026-02-09 03:41:48.470409 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-09 03:41:48.470417 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-09 03:41:48.470425 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-09 03:41:48.470432 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-09 03:41:48.470440 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:41:48.470448 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:41:48.470455 | orchestrator |
2026-02-09 03:41:48.470462 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-09 03:41:48.470470 | orchestrator | Monday 09 February 2026 03:41:11 +0000 (0:00:02.496) 0:05:06.778 *******
2026-02-09 03:41:48.470477 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-09 03:41:48.470484 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-09 03:41:48.470491 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-09 03:41:48.470499 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-09 03:41:48.470506 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-09 03:41:48.470528 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-09 03:41:48.470535 | orchestrator |
2026-02-09 03:41:48.470542 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-09 03:41:48.470550 | orchestrator | Monday 09 February 2026 03:41:13 +0000 (0:00:01.260) 0:05:08.039 *******
2026-02-09 03:41:48.470558 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:41:48.470564 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:41:48.470572 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:41:48.470579 | orchestrator |
2026-02-09 03:41:48.470586 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-09 03:41:48.470593 | orchestrator | Monday 09 February 2026 03:41:13 +0000 (0:00:00.692) 0:05:08.732 *******
2026-02-09 03:41:48.470601 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:41:48.470609 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:41:48.470616 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:41:48.470623 | orchestrator |
2026-02-09 03:41:48.470631 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-09 03:41:48.470638 | orchestrator | Monday 09 February 2026 03:41:14 +0000 (0:00:00.340) 0:05:09.072 *******
2026-02-09 03:41:48.470646 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:41:48.470653 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:41:48.470665 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:41:48.470671 | orchestrator |
2026-02-09 03:41:48.470678 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-09 03:41:48.470684 | orchestrator | Monday 09 February 2026 03:41:14 +0000 (0:00:00.602) 0:05:09.675 *******
2026-02-09 03:41:48.470690 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:41:48.470697 | orchestrator |
2026-02-09 03:41:48.470703 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-09 03:41:48.470709 | orchestrator | Monday 09 February 2026 03:41:15 +0000 (0:00:00.633) 0:05:10.309 *******
2026-02-09 03:41:48.470716 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:41:48.470722 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:41:48.470728 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:41:48.470734 | orchestrator |
2026-02-09 03:41:48.470740 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-09 03:41:48.470747 | orchestrator | Monday 09 February 2026 03:41:15 +0000 (0:00:00.363) 0:05:10.672 *******
2026-02-09 03:41:48.470753 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:41:48.470759 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:41:48.470765 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:41:48.470771 | orchestrator |
2026-02-09 03:41:48.470777 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-09 03:41:48.470784 | orchestrator | Monday 09 February 2026 03:41:16 +0000 (0:00:00.704) 0:05:11.376 *******
2026-02-09 03:41:48.470790 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:41:48.470796 | orchestrator |
2026-02-09 03:41:48.470802 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-09 03:41:48.470808 | orchestrator | Monday 09 February 2026 03:41:17 +0000 (0:00:00.636) 0:05:12.012 *******
2026-02-09 03:41:48.470815 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:41:48.470821 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:41:48.470827 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:41:48.470833 | orchestrator |
2026-02-09 03:41:48.470879 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-09 03:41:48.470888 | orchestrator | Monday 09 February 2026 03:41:18 +0000 (0:00:01.222) 0:05:13.235 *******
2026-02-09 03:41:48.470895 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:41:48.470901 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:41:48.470907 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:41:48.470913 | orchestrator |
2026-02-09 03:41:48.470919 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-09 03:41:48.470926 | orchestrator | Monday 09 February 2026 03:41:19 +0000 (0:00:01.517) 0:05:14.753 *******
2026-02-09 03:41:48.470932 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:41:48.470938 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:41:48.470944 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:41:48.470950 | orchestrator |
2026-02-09 03:41:48.470957 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-09 03:41:48.470963 | orchestrator | Monday 09 February 2026 03:41:21 +0000 (0:00:01.757) 0:05:16.510 *******
2026-02-09 03:41:48.470969 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:41:48.470980 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:41:48.470987 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:41:48.470993 | orchestrator |
2026-02-09 03:41:48.470999 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-09 03:41:48.471005 | orchestrator | Monday 09 February 2026 03:41:23 +0000 (0:00:01.872) 0:05:18.383 *******
2026-02-09 03:41:48.471011 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:41:48.471017 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:41:48.471024 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-09 03:41:48.471030 | orchestrator |
2026-02-09 03:41:48.471040 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-09 03:41:48.471062 | orchestrator | Monday 09 February 2026 03:41:24 +0000 (0:00:00.791) 0:05:19.174 *******
2026-02-09 03:41:48.471075 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-09 03:41:48.471086 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-09 03:41:48.471096 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-09 03:41:48.471106 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-09 03:41:48.471117 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-09 03:41:48.471127 | orchestrator |
2026-02-09 03:41:48.471144 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-09 03:42:16.742481 | orchestrator | Monday 09 February 2026 03:41:48 +0000 (0:00:24.114) 0:05:43.289 *******
2026-02-09 03:42:16.742568 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-09 03:42:16.742580 | orchestrator |
2026-02-09 03:42:16.742588 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-09 03:42:16.742596 | orchestrator | Monday 09 February 2026 03:41:49 +0000 (0:00:01.230) 0:05:44.520 *******
2026-02-09 03:42:16.742672 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:42:16.742678 | orchestrator |
2026-02-09 03:42:16.742682 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-09 03:42:16.742687 | orchestrator | Monday 09 February 2026 03:41:50 +0000 (0:00:00.394) 0:05:44.914 *******
2026-02-09 03:42:16.742691 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:42:16.742695 | orchestrator |
2026-02-09 03:42:16.742699 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-09 03:42:16.742704 | orchestrator | Monday 09 February 2026 03:41:50 +0000 (0:00:00.186) 0:05:45.101 *******
2026-02-09 03:42:16.742708 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-09 03:42:16.742713 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-09 03:42:16.742719 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-09 03:42:16.742726 | orchestrator |
2026-02-09 03:42:16.742732 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-09 03:42:16.742738 | orchestrator | Monday 09 February 2026 03:41:56 +0000 (0:00:06.352) 0:05:51.453 *******
2026-02-09 03:42:16.742744 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-09 03:42:16.742750 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-09 03:42:16.742756 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-09 03:42:16.742763 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-09 03:42:16.742769 | orchestrator |
2026-02-09 03:42:16.742776 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-09 03:42:16.742782 | orchestrator | Monday 09 February 2026 03:42:01 +0000 (0:00:05.103) 0:05:56.557 *******
2026-02-09 03:42:16.742789 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:42:16.742796 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:42:16.742802 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:42:16.742809 | orchestrator |
2026-02-09 03:42:16.742816 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-09 03:42:16.742823 | orchestrator | Monday 09 February 2026 03:42:02 +0000 (0:00:00.667) 0:05:57.224 *******
2026-02-09 03:42:16.742829 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:42:16.742869 | orchestrator |
2026-02-09 03:42:16.742873 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-09 03:42:16.742877 | orchestrator | Monday 09 February 2026 03:42:02 +0000 (0:00:00.563) 0:05:57.788 *******
2026-02-09 03:42:16.742901 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:42:16.742905 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:42:16.742909 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:42:16.742913 | orchestrator |
2026-02-09 03:42:16.742917 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-09 03:42:16.742921 | orchestrator | Monday 09 February 2026 03:42:03 +0000 (0:00:00.674) 0:05:58.463 *******
2026-02-09 03:42:16.742925 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:42:16.742928 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:42:16.742932 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:42:16.742936 | orchestrator |
2026-02-09 03:42:16.742940 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-09 03:42:16.742944 | orchestrator | Monday 09 February 2026 03:42:04 +0000 (0:00:01.180) 0:05:59.643 *******
2026-02-09 03:42:16.742948 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 03:42:16.742952 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 03:42:16.742955 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 03:42:16.742959 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:42:16.742963 | orchestrator |
2026-02-09 03:42:16.742978 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-09 03:42:16.742981 | orchestrator | Monday 09 February 2026 03:42:05 +0000 (0:00:00.760) 0:06:00.404 *******
2026-02-09 03:42:16.742985 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:42:16.742989 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:42:16.742993 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:42:16.742996 | orchestrator |
2026-02-09 03:42:16.743000 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-09 03:42:16.743004 | orchestrator |
2026-02-09 03:42:16.743008 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-09 03:42:16.743012 | orchestrator | Monday 09 February 2026 03:42:06 +0000 (0:00:00.589) 0:06:00.994 *******
2026-02-09 03:42:16.743016 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:42:16.743021 | orchestrator |
2026-02-09 03:42:16.743025 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-09 03:42:16.743029 | orchestrator | Monday 09 February 2026 03:42:07 +0000 (0:00:00.909) 0:06:01.903 *******
2026-02-09 03:42:16.743033 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:42:16.743037 | orchestrator |
2026-02-09 03:42:16.743041 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-09 03:42:16.743045 | orchestrator | Monday 09 February 2026 03:42:07 +0000 (0:00:00.817) 0:06:02.721 *******
2026-02-09 03:42:16.743050 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:42:16.743054 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:42:16.743073 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:42:16.743078 | orchestrator |
2026-02-09 03:42:16.743082 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-09 03:42:16.743086 | orchestrator | Monday 09 February 2026 03:42:08 +0000 (0:00:00.358) 0:06:03.079 *******
2026-02-09 03:42:16.743091 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:42:16.743095 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:42:16.743099 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:42:16.743104 | orchestrator |
2026-02-09 03:42:16.743108 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-09 03:42:16.743112 | orchestrator | Monday 09 February 2026 03:42:08 +0000 (0:00:00.682) 0:06:03.762 *******
2026-02-09 03:42:16.743117 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:42:16.743121 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:42:16.743126 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:42:16.743130 | orchestrator | 2026-02-09 03:42:16.743134 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-09 03:42:16.743144 | orchestrator | Monday 09 February 2026 03:42:09 +0000 (0:00:00.701) 0:06:04.464 ******* 2026-02-09 03:42:16.743149 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:42:16.743153 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:42:16.743157 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:42:16.743162 | orchestrator | 2026-02-09 03:42:16.743166 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-09 03:42:16.743171 | orchestrator | Monday 09 February 2026 03:42:10 +0000 (0:00:00.975) 0:06:05.440 ******* 2026-02-09 03:42:16.743175 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:42:16.743180 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:42:16.743184 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:42:16.743188 | orchestrator | 2026-02-09 03:42:16.743193 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-09 03:42:16.743197 | orchestrator | Monday 09 February 2026 03:42:10 +0000 (0:00:00.359) 0:06:05.799 ******* 2026-02-09 03:42:16.743201 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:42:16.743206 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:42:16.743210 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:42:16.743215 | orchestrator | 2026-02-09 03:42:16.743219 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-09 03:42:16.743223 | orchestrator | Monday 09 February 2026 03:42:11 +0000 (0:00:00.366) 0:06:06.165 ******* 2026-02-09 03:42:16.743228 | 
orchestrator | skipping: [testbed-node-3] 2026-02-09 03:42:16.743232 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:42:16.743237 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:42:16.743241 | orchestrator | 2026-02-09 03:42:16.743245 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-09 03:42:16.743250 | orchestrator | Monday 09 February 2026 03:42:11 +0000 (0:00:00.334) 0:06:06.500 ******* 2026-02-09 03:42:16.743255 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:42:16.743258 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:42:16.743262 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:42:16.743266 | orchestrator | 2026-02-09 03:42:16.743269 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-09 03:42:16.743273 | orchestrator | Monday 09 February 2026 03:42:12 +0000 (0:00:01.306) 0:06:07.807 ******* 2026-02-09 03:42:16.743277 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:42:16.743280 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:42:16.743284 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:42:16.743288 | orchestrator | 2026-02-09 03:42:16.743291 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-09 03:42:16.743295 | orchestrator | Monday 09 February 2026 03:42:13 +0000 (0:00:00.672) 0:06:08.479 ******* 2026-02-09 03:42:16.743299 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:42:16.743302 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:42:16.743306 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:42:16.743310 | orchestrator | 2026-02-09 03:42:16.743314 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-09 03:42:16.743317 | orchestrator | Monday 09 February 2026 03:42:13 +0000 (0:00:00.340) 0:06:08.820 ******* 2026-02-09 03:42:16.743321 | orchestrator | skipping: 
[testbed-node-3] 2026-02-09 03:42:16.743325 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:42:16.743328 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:42:16.743332 | orchestrator | 2026-02-09 03:42:16.743336 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-09 03:42:16.743339 | orchestrator | Monday 09 February 2026 03:42:14 +0000 (0:00:00.335) 0:06:09.155 ******* 2026-02-09 03:42:16.743343 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:42:16.743347 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:42:16.743353 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:42:16.743357 | orchestrator | 2026-02-09 03:42:16.743363 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-09 03:42:16.743369 | orchestrator | Monday 09 February 2026 03:42:14 +0000 (0:00:00.656) 0:06:09.811 ******* 2026-02-09 03:42:16.743380 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:42:16.743386 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:42:16.743392 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:42:16.743397 | orchestrator | 2026-02-09 03:42:16.743403 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-09 03:42:16.743409 | orchestrator | Monday 09 February 2026 03:42:15 +0000 (0:00:00.391) 0:06:10.203 ******* 2026-02-09 03:42:16.743415 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:42:16.743420 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:42:16.743427 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:42:16.743433 | orchestrator | 2026-02-09 03:42:16.743439 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-09 03:42:16.743446 | orchestrator | Monday 09 February 2026 03:42:15 +0000 (0:00:00.384) 0:06:10.587 ******* 2026-02-09 03:42:16.743452 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:42:16.743459 | 
orchestrator | skipping: [testbed-node-4] 2026-02-09 03:42:16.743465 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:42:16.743471 | orchestrator | 2026-02-09 03:42:16.743477 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-09 03:42:16.743483 | orchestrator | Monday 09 February 2026 03:42:16 +0000 (0:00:00.358) 0:06:10.946 ******* 2026-02-09 03:42:16.743489 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:42:16.743494 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:42:16.743498 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:42:16.743502 | orchestrator | 2026-02-09 03:42:16.743510 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-09 03:43:11.564318 | orchestrator | Monday 09 February 2026 03:42:16 +0000 (0:00:00.621) 0:06:11.568 ******* 2026-02-09 03:43:11.564403 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:11.564415 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:11.564423 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:43:11.564430 | orchestrator | 2026-02-09 03:43:11.564438 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-09 03:43:11.564445 | orchestrator | Monday 09 February 2026 03:42:17 +0000 (0:00:00.372) 0:06:11.941 ******* 2026-02-09 03:43:11.564452 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:43:11.564460 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:43:11.564467 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:43:11.564474 | orchestrator | 2026-02-09 03:43:11.564481 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-09 03:43:11.564488 | orchestrator | Monday 09 February 2026 03:42:17 +0000 (0:00:00.355) 0:06:12.297 ******* 2026-02-09 03:43:11.564495 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:43:11.564501 | orchestrator | ok: 
[testbed-node-4] 2026-02-09 03:43:11.564508 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:43:11.564515 | orchestrator | 2026-02-09 03:43:11.564521 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-09 03:43:11.564528 | orchestrator | Monday 09 February 2026 03:42:18 +0000 (0:00:00.853) 0:06:13.150 ******* 2026-02-09 03:43:11.564535 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:43:11.564542 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:43:11.564548 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:43:11.564555 | orchestrator | 2026-02-09 03:43:11.564562 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-09 03:43:11.564568 | orchestrator | Monday 09 February 2026 03:42:18 +0000 (0:00:00.376) 0:06:13.527 ******* 2026-02-09 03:43:11.564575 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-09 03:43:11.564583 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 03:43:11.564589 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 03:43:11.564598 | orchestrator | 2026-02-09 03:43:11.564609 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-09 03:43:11.564647 | orchestrator | Monday 09 February 2026 03:42:19 +0000 (0:00:00.691) 0:06:14.219 ******* 2026-02-09 03:43:11.564658 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:43:11.564669 | orchestrator | 2026-02-09 03:43:11.564680 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-09 03:43:11.564689 | orchestrator | Monday 09 February 2026 03:42:20 +0000 (0:00:00.852) 0:06:15.071 ******* 2026-02-09 03:43:11.564699 | orchestrator | skipping: 
[testbed-node-3] 2026-02-09 03:43:11.564709 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:11.564719 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:43:11.564730 | orchestrator | 2026-02-09 03:43:11.564741 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-09 03:43:11.564751 | orchestrator | Monday 09 February 2026 03:42:20 +0000 (0:00:00.340) 0:06:15.411 ******* 2026-02-09 03:43:11.564762 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:11.564772 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:11.564783 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:43:11.564794 | orchestrator | 2026-02-09 03:43:11.564805 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-09 03:43:11.564816 | orchestrator | Monday 09 February 2026 03:42:20 +0000 (0:00:00.324) 0:06:15.736 ******* 2026-02-09 03:43:11.564853 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:43:11.564864 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:43:11.564876 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:43:11.564887 | orchestrator | 2026-02-09 03:43:11.564900 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-09 03:43:11.564911 | orchestrator | Monday 09 February 2026 03:42:21 +0000 (0:00:00.622) 0:06:16.358 ******* 2026-02-09 03:43:11.564922 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:43:11.564930 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:43:11.564938 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:43:11.564945 | orchestrator | 2026-02-09 03:43:11.564953 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-09 03:43:11.564962 | orchestrator | Monday 09 February 2026 03:42:22 +0000 (0:00:00.684) 0:06:17.043 ******* 2026-02-09 03:43:11.564982 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-09 03:43:11.564992 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-09 03:43:11.564999 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-09 03:43:11.565007 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-09 03:43:11.565019 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-09 03:43:11.565032 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-09 03:43:11.565043 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-09 03:43:11.565054 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-09 03:43:11.565064 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-09 03:43:11.565074 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-09 03:43:11.565085 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-09 03:43:11.565096 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-09 03:43:11.565127 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-09 03:43:11.565138 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-09 03:43:11.565150 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-09 03:43:11.565173 | orchestrator | 2026-02-09 03:43:11.565185 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-02-09 03:43:11.565197 | orchestrator | Monday 09 February 2026 03:42:24 +0000 (0:00:02.020) 0:06:19.063 ******* 2026-02-09 03:43:11.565208 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:11.565220 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:11.565231 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:43:11.565242 | orchestrator | 2026-02-09 03:43:11.565253 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-09 03:43:11.565263 | orchestrator | Monday 09 February 2026 03:42:24 +0000 (0:00:00.339) 0:06:19.402 ******* 2026-02-09 03:43:11.565275 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:43:11.565286 | orchestrator | 2026-02-09 03:43:11.565296 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-09 03:43:11.565308 | orchestrator | Monday 09 February 2026 03:42:25 +0000 (0:00:00.815) 0:06:20.218 ******* 2026-02-09 03:43:11.565315 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-09 03:43:11.565322 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-09 03:43:11.565328 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-09 03:43:11.565335 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-09 03:43:11.565342 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-09 03:43:11.565349 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-09 03:43:11.565355 | orchestrator | 2026-02-09 03:43:11.565362 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-09 03:43:11.565368 | orchestrator | Monday 09 February 2026 03:42:26 +0000 (0:00:00.932) 0:06:21.151 ******* 2026-02-09 03:43:11.565375 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:43:11.565382 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-09 03:43:11.565388 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-09 03:43:11.565395 | orchestrator | 2026-02-09 03:43:11.565401 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-09 03:43:11.565408 | orchestrator | Monday 09 February 2026 03:42:28 +0000 (0:00:01.927) 0:06:23.078 ******* 2026-02-09 03:43:11.565415 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-09 03:43:11.565421 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-09 03:43:11.565428 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:43:11.565435 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-09 03:43:11.565441 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-09 03:43:11.565448 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:43:11.565454 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-09 03:43:11.565461 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-09 03:43:11.565467 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:43:11.565474 | orchestrator | 2026-02-09 03:43:11.565480 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-09 03:43:11.565487 | orchestrator | Monday 09 February 2026 03:42:29 +0000 (0:00:01.098) 0:06:24.177 ******* 2026-02-09 03:43:11.565493 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-09 03:43:11.565500 | orchestrator | 2026-02-09 03:43:11.565507 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-09 03:43:11.565513 | orchestrator | Monday 09 February 2026 03:42:31 +0000 (0:00:02.046) 0:06:26.224 ******* 2026-02-09 03:43:11.565520 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:43:11.565526 | orchestrator | 2026-02-09 03:43:11.565533 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-09 03:43:11.565546 | orchestrator | Monday 09 February 2026 03:42:32 +0000 (0:00:00.885) 0:06:27.109 ******* 2026-02-09 03:43:11.565560 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}) 2026-02-09 03:43:11.565568 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}) 2026-02-09 03:43:11.565575 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}) 2026-02-09 03:43:11.565581 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}) 2026-02-09 03:43:11.565588 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}) 2026-02-09 03:43:11.565595 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}) 2026-02-09 03:43:11.565601 | orchestrator | 2026-02-09 03:43:11.565608 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-09 03:43:11.565622 | orchestrator | Monday 09 February 2026 03:43:11 +0000 (0:00:39.274) 0:07:06.383 ******* 2026-02-09 03:43:50.179753 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:50.179899 | orchestrator | skipping: [testbed-node-4] 2026-02-09 
03:43:50.179918 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:43:50.179930 | orchestrator | 2026-02-09 03:43:50.179941 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-09 03:43:50.179950 | orchestrator | Monday 09 February 2026 03:43:11 +0000 (0:00:00.317) 0:07:06.701 ******* 2026-02-09 03:43:50.179958 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:43:50.179966 | orchestrator | 2026-02-09 03:43:50.179973 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-09 03:43:50.179980 | orchestrator | Monday 09 February 2026 03:43:12 +0000 (0:00:00.837) 0:07:07.538 ******* 2026-02-09 03:43:50.179987 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:43:50.179994 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:43:50.180001 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:43:50.180007 | orchestrator | 2026-02-09 03:43:50.180014 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-09 03:43:50.180021 | orchestrator | Monday 09 February 2026 03:43:13 +0000 (0:00:00.692) 0:07:08.231 ******* 2026-02-09 03:43:50.180028 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:43:50.180035 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:43:50.180042 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:43:50.180049 | orchestrator | 2026-02-09 03:43:50.180055 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-09 03:43:50.180062 | orchestrator | Monday 09 February 2026 03:43:15 +0000 (0:00:02.423) 0:07:10.654 ******* 2026-02-09 03:43:50.180069 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:43:50.180077 | orchestrator | 2026-02-09 03:43:50.180084 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-02-09 03:43:50.180090 | orchestrator | Monday 09 February 2026 03:43:16 +0000 (0:00:00.830) 0:07:11.485 ******* 2026-02-09 03:43:50.180097 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:43:50.180104 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:43:50.180110 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:43:50.180117 | orchestrator | 2026-02-09 03:43:50.180123 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-09 03:43:50.180130 | orchestrator | Monday 09 February 2026 03:43:17 +0000 (0:00:01.217) 0:07:12.702 ******* 2026-02-09 03:43:50.180137 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:43:50.180166 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:43:50.180173 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:43:50.180180 | orchestrator | 2026-02-09 03:43:50.180187 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-09 03:43:50.180193 | orchestrator | Monday 09 February 2026 03:43:19 +0000 (0:00:01.157) 0:07:13.860 ******* 2026-02-09 03:43:50.180200 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:43:50.180207 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:43:50.180213 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:43:50.180220 | orchestrator | 2026-02-09 03:43:50.180226 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-09 03:43:50.180233 | orchestrator | Monday 09 February 2026 03:43:21 +0000 (0:00:02.066) 0:07:15.927 ******* 2026-02-09 03:43:50.180240 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:50.180246 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:50.180253 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:43:50.180259 | orchestrator | 2026-02-09 03:43:50.180266 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-02-09 03:43:50.180272 | orchestrator | Monday 09 February 2026 03:43:21 +0000 (0:00:00.388) 0:07:16.315 ******* 2026-02-09 03:43:50.180279 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:50.180288 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:50.180295 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:43:50.180303 | orchestrator | 2026-02-09 03:43:50.180311 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-09 03:43:50.180319 | orchestrator | Monday 09 February 2026 03:43:21 +0000 (0:00:00.357) 0:07:16.672 ******* 2026-02-09 03:43:50.180327 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-02-09 03:43:50.180334 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-09 03:43:50.180342 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-09 03:43:50.180365 | orchestrator | ok: [testbed-node-3] => (item=1) 2026-02-09 03:43:50.180374 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-09 03:43:50.180383 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-09 03:43:50.180391 | orchestrator | 2026-02-09 03:43:50.180400 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-09 03:43:50.180408 | orchestrator | Monday 09 February 2026 03:43:22 +0000 (0:00:00.986) 0:07:17.659 ******* 2026-02-09 03:43:50.180417 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-09 03:43:50.180425 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-02-09 03:43:50.180434 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-09 03:43:50.180442 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-02-09 03:43:50.180451 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-09 03:43:50.180459 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-09 03:43:50.180468 | orchestrator | 2026-02-09 03:43:50.180476 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-02-09 03:43:50.180498 | orchestrator | Monday 09 February 2026 03:43:25 +0000 (0:00:02.481) 0:07:20.140 ******* 2026-02-09 03:43:50.180507 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-09 03:43:50.180515 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-09 03:43:50.180524 | orchestrator | changed: [testbed-node-4] => (item=0) 2026-02-09 03:43:50.180533 | orchestrator | changed: [testbed-node-3] => (item=1) 2026-02-09 03:43:50.180541 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-09 03:43:50.180550 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-09 03:43:50.180558 | orchestrator | 2026-02-09 03:43:50.180566 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-09 03:43:50.180574 | orchestrator | Monday 09 February 2026 03:43:29 +0000 (0:00:04.228) 0:07:24.369 ******* 2026-02-09 03:43:50.180598 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:50.180606 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:50.180615 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-09 03:43:50.180623 | orchestrator | 2026-02-09 03:43:50.180636 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-09 03:43:50.180645 | orchestrator | Monday 09 February 2026 03:43:31 +0000 (0:00:02.306) 0:07:26.675 ******* 2026-02-09 03:43:50.180653 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:50.180662 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:50.180670 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-02-09 03:43:50.180678 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-09 03:43:50.180685 | orchestrator | 2026-02-09 03:43:50.180692 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-09 03:43:50.180700 | orchestrator | Monday 09 February 2026 03:43:44 +0000 (0:00:12.316) 0:07:38.992 ******* 2026-02-09 03:43:50.180707 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:50.180714 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:50.180721 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:43:50.180729 | orchestrator | 2026-02-09 03:43:50.180736 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-09 03:43:50.180832 | orchestrator | Monday 09 February 2026 03:43:45 +0000 (0:00:01.247) 0:07:40.239 ******* 2026-02-09 03:43:50.180840 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:43:50.180847 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:43:50.180855 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:43:50.180862 | orchestrator | 2026-02-09 03:43:50.180869 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-09 03:43:50.180876 | orchestrator | Monday 09 February 2026 03:43:45 +0000 (0:00:00.359) 0:07:40.598 ******* 2026-02-09 03:43:50.180883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:43:50.180891 | orchestrator | 2026-02-09 03:43:50.180937 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-09 03:43:50.181001 | orchestrator | Monday 09 February 2026 03:43:46 +0000 (0:00:00.851) 0:07:41.450 ******* 2026-02-09 03:43:50.181073 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 03:43:50.181081 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-02-09 03:43:50.181089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 03:43:50.181096 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181103 | orchestrator |
2026-02-09 03:43:50.181110 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-09 03:43:50.181118 | orchestrator | Monday 09 February 2026 03:43:47 +0000 (0:00:00.426) 0:07:41.876 *******
2026-02-09 03:43:50.181125 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181133 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:43:50.181145 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:43:50.181157 | orchestrator |
2026-02-09 03:43:50.181167 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-09 03:43:50.181176 | orchestrator | Monday 09 February 2026 03:43:47 +0000 (0:00:00.350) 0:07:42.226 *******
2026-02-09 03:43:50.181186 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181204 | orchestrator |
2026-02-09 03:43:50.181218 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-09 03:43:50.181229 | orchestrator | Monday 09 February 2026 03:43:47 +0000 (0:00:00.245) 0:07:42.472 *******
2026-02-09 03:43:50.181240 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181252 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:43:50.181263 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:43:50.181274 | orchestrator |
2026-02-09 03:43:50.181285 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-09 03:43:50.181296 | orchestrator | Monday 09 February 2026 03:43:48 +0000 (0:00:00.253) 0:07:43.110 *******
2026-02-09 03:43:50.181308 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181319 | orchestrator |
2026-02-09 03:43:50.181331 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-09 03:43:50.181355 | orchestrator | Monday 09 February 2026 03:43:48 +0000 (0:00:00.274) 0:07:43.363 *******
2026-02-09 03:43:50.181374 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181387 | orchestrator |
2026-02-09 03:43:50.181397 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-09 03:43:50.181404 | orchestrator | Monday 09 February 2026 03:43:48 +0000 (0:00:00.152) 0:07:43.638 *******
2026-02-09 03:43:50.181411 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181418 | orchestrator |
2026-02-09 03:43:50.181425 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-09 03:43:50.181523 | orchestrator | Monday 09 February 2026 03:43:48 +0000 (0:00:00.302) 0:07:43.790 *******
2026-02-09 03:43:50.181532 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181540 | orchestrator |
2026-02-09 03:43:50.181547 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-09 03:43:50.181554 | orchestrator | Monday 09 February 2026 03:43:49 +0000 (0:00:00.245) 0:07:44.092 *******
2026-02-09 03:43:50.181561 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181569 | orchestrator |
2026-02-09 03:43:50.181576 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-09 03:43:50.181583 | orchestrator | Monday 09 February 2026 03:43:49 +0000 (0:00:00.245) 0:07:44.337 *******
2026-02-09 03:43:50.181590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-09 03:43:50.181598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:43:50.181605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 03:43:50.181612 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:43:50.181619 | orchestrator |
2026-02-09 03:43:50.181626 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-09 03:43:50.181633 | orchestrator | Monday 09 February 2026 03:43:49 +0000 (0:00:00.474) 0:07:44.811 *******
2026-02-09 03:43:50.181652 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.683673 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.683766 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.683778 | orchestrator |
2026-02-09 03:44:11.683787 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-09 03:44:11.683796 | orchestrator | Monday 09 February 2026 03:43:50 +0000 (0:00:00.344) 0:07:45.156 *******
2026-02-09 03:44:11.683805 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.683860 | orchestrator |
2026-02-09 03:44:11.683870 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-09 03:44:11.683878 | orchestrator | Monday 09 February 2026 03:43:50 +0000 (0:00:00.259) 0:07:45.415 *******
2026-02-09 03:44:11.683885 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.683893 | orchestrator |
2026-02-09 03:44:11.683900 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-09 03:44:11.683907 | orchestrator |
2026-02-09 03:44:11.683915 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-09 03:44:11.683922 | orchestrator | Monday 09 February 2026 03:43:51 +0000 (0:00:01.380) 0:07:46.795 *******
2026-02-09 03:44:11.683931 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:44:11.683939 | orchestrator |
2026-02-09 03:44:11.683947 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-09 03:44:11.683954 | orchestrator | Monday 09 February 2026 03:43:53 +0000 (0:00:01.384) 0:07:48.180 *******
2026-02-09 03:44:11.683962 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:44:11.683970 | orchestrator |
2026-02-09 03:44:11.683977 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-09 03:44:11.684008 | orchestrator | Monday 09 February 2026 03:43:54 +0000 (0:00:01.340) 0:07:49.521 *******
2026-02-09 03:44:11.684016 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.684024 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.684031 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.684038 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:11.684046 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:11.684053 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:11.684060 | orchestrator |
2026-02-09 03:44:11.684068 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-09 03:44:11.684075 | orchestrator | Monday 09 February 2026 03:43:56 +0000 (0:00:01.361) 0:07:50.882 *******
2026-02-09 03:44:11.684082 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:11.684089 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.684097 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.684104 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:11.684111 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.684118 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:11.684125 | orchestrator |
2026-02-09 03:44:11.684133 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-09 03:44:11.684140 | orchestrator | Monday 09 February 2026 03:43:56 +0000 (0:00:00.741) 0:07:51.623 *******
2026-02-09 03:44:11.684147 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.684154 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:11.684161 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.684168 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:11.684175 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:11.684182 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.684190 | orchestrator |
2026-02-09 03:44:11.684197 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-09 03:44:11.684204 | orchestrator | Monday 09 February 2026 03:43:57 +0000 (0:00:00.957) 0:07:52.580 *******
2026-02-09 03:44:11.684211 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:11.684218 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.684226 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.684243 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:11.684251 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.684267 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:11.684274 | orchestrator |
2026-02-09 03:44:11.684281 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-09 03:44:11.684301 | orchestrator | Monday 09 February 2026 03:43:58 +0000 (0:00:00.783) 0:07:53.364 *******
2026-02-09 03:44:11.684308 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.684316 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.684323 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.684330 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:11.684337 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:11.684344 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:11.684352 | orchestrator |
2026-02-09 03:44:11.684359 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-09 03:44:11.684366 | orchestrator | Monday 09 February 2026 03:43:59 +0000 (0:00:01.322) 0:07:54.686 *******
2026-02-09 03:44:11.684373 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.684380 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.684387 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.684395 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.684402 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.684410 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.684417 | orchestrator |
2026-02-09 03:44:11.684425 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-09 03:44:11.684432 | orchestrator | Monday 09 February 2026 03:44:00 +0000 (0:00:00.753) 0:07:55.440 *******
2026-02-09 03:44:11.684439 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.684446 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.684453 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.684471 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.684478 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.684485 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.684492 | orchestrator |
2026-02-09 03:44:11.684499 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-09 03:44:11.684507 | orchestrator | Monday 09 February 2026 03:44:01 +0000 (0:00:00.900) 0:07:56.340 *******
2026-02-09 03:44:11.684514 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:11.684536 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:11.684544 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:11.684551 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:11.684558 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:11.684565 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:11.684572 | orchestrator |
2026-02-09 03:44:11.684580 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-09 03:44:11.684587 | orchestrator | Monday 09 February 2026 03:44:02 +0000 (0:00:01.171) 0:07:57.512 *******
2026-02-09 03:44:11.684594 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:11.684601 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:11.684608 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:11.684615 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:11.684622 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:11.684629 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:11.684637 | orchestrator |
2026-02-09 03:44:11.684644 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-09 03:44:11.684651 | orchestrator | Monday 09 February 2026 03:44:04 +0000 (0:00:01.409) 0:07:58.922 *******
2026-02-09 03:44:11.684658 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.684666 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.684673 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.684680 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.684687 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.684694 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.684701 | orchestrator |
2026-02-09 03:44:11.684708 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-09 03:44:11.684716 | orchestrator | Monday 09 February 2026 03:44:04 +0000 (0:00:00.636) 0:07:59.559 *******
2026-02-09 03:44:11.684723 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.684730 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.684737 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.684744 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:11.684752 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:11.684759 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:11.684766 | orchestrator |
2026-02-09 03:44:11.684774 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-09 03:44:11.684781 | orchestrator | Monday 09 February 2026 03:44:05 +0000 (0:00:00.946) 0:08:00.506 *******
2026-02-09 03:44:11.684788 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:11.684795 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:11.684802 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:11.684809 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.684834 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.684842 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.684849 | orchestrator |
2026-02-09 03:44:11.684856 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-09 03:44:11.684864 | orchestrator | Monday 09 February 2026 03:44:06 +0000 (0:00:00.705) 0:08:01.211 *******
2026-02-09 03:44:11.684871 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:11.684878 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:11.684885 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:11.684892 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.684899 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.684907 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.684914 | orchestrator |
2026-02-09 03:44:11.684921 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-09 03:44:11.684934 | orchestrator | Monday 09 February 2026 03:44:07 +0000 (0:00:00.914) 0:08:02.125 *******
2026-02-09 03:44:11.684941 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:11.684949 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:11.684956 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:11.684963 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.684970 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.684977 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.684984 | orchestrator |
2026-02-09 03:44:11.684991 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-09 03:44:11.684999 | orchestrator | Monday 09 February 2026 03:44:07 +0000 (0:00:00.686) 0:08:02.812 *******
2026-02-09 03:44:11.685006 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.685013 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.685020 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.685027 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.685034 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.685041 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.685048 | orchestrator |
2026-02-09 03:44:11.685055 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-09 03:44:11.685063 | orchestrator | Monday 09 February 2026 03:44:08 +0000 (0:00:00.889) 0:08:03.701 *******
2026-02-09 03:44:11.685070 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.685077 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.685085 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.685092 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:44:11.685099 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:44:11.685106 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:44:11.685113 | orchestrator |
2026-02-09 03:44:11.685121 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-09 03:44:11.685128 | orchestrator | Monday 09 February 2026 03:44:09 +0000 (0:00:00.636) 0:08:04.337 *******
2026-02-09 03:44:11.685135 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:11.685142 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:11.685149 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:11.685156 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:11.685163 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:11.685170 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:11.685177 | orchestrator |
2026-02-09 03:44:11.685185 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-09 03:44:11.685192 | orchestrator | Monday 09 February 2026 03:44:10 +0000 (0:00:00.975) 0:08:05.313 *******
2026-02-09 03:44:11.685199 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:11.685206 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:11.685213 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:11.685220 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:11.685227 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:11.685234 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:11.685241 | orchestrator |
2026-02-09 03:44:11.685248 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-09 03:44:11.685255 | orchestrator | Monday 09 February 2026 03:44:11 +0000 (0:00:00.728) 0:08:06.042 *******
2026-02-09 03:44:11.685268 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.495592 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.495714 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.495725 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:42.495732 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:42.495740 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:42.495750 | orchestrator |
2026-02-09 03:44:42.495762 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-09 03:44:42.495779 | orchestrator | Monday 09 February 2026 03:44:12 +0000 (0:00:01.397) 0:08:07.439 *******
2026-02-09 03:44:42.495791 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-09 03:44:42.495801 | orchestrator |
2026-02-09 03:44:42.495884 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-09 03:44:42.495897 | orchestrator | Monday 09 February 2026 03:44:16 +0000 (0:00:03.787) 0:08:11.226 *******
2026-02-09 03:44:42.495907 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-09 03:44:42.495913 | orchestrator |
2026-02-09 03:44:42.495919 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-09 03:44:42.495926 | orchestrator | Monday 09 February 2026 03:44:18 +0000 (0:00:02.468) 0:08:13.695 *******
2026-02-09 03:44:42.495932 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:44:42.495938 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:44:42.495944 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:44:42.495954 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:42.495965 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:44:42.495974 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:44:42.495984 | orchestrator |
2026-02-09 03:44:42.495993 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-09 03:44:42.496003 | orchestrator | Monday 09 February 2026 03:44:20 +0000 (0:00:01.518) 0:08:15.213 *******
2026-02-09 03:44:42.496012 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:44:42.496021 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:44:42.496031 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:44:42.496040 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:44:42.496049 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:44:42.496060 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:44:42.496070 | orchestrator |
2026-02-09 03:44:42.496080 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-09 03:44:42.496090 | orchestrator | Monday 09 February 2026 03:44:21 +0000 (0:00:01.258) 0:08:16.472 *******
2026-02-09 03:44:42.496103 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:44:42.496115 | orchestrator |
2026-02-09 03:44:42.496122 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-09 03:44:42.496128 | orchestrator | Monday 09 February 2026 03:44:23 +0000 (0:00:01.379) 0:08:17.851 *******
2026-02-09 03:44:42.496134 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:44:42.496140 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:44:42.496146 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:44:42.496152 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:44:42.496158 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:44:42.496164 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:44:42.496170 | orchestrator |
2026-02-09 03:44:42.496176 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-09 03:44:42.496183 | orchestrator | Monday 09 February 2026 03:44:24 +0000 (0:00:01.632) 0:08:19.483 *******
2026-02-09 03:44:42.496189 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:44:42.496195 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:44:42.496201 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:44:42.496206 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:44:42.496212 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:44:42.496218 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:44:42.496224 | orchestrator |
2026-02-09 03:44:42.496231 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-09 03:44:42.496237 | orchestrator | Monday 09 February 2026 03:44:28 +0000 (0:00:03.784) 0:08:23.268 *******
2026-02-09 03:44:42.496243 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:44:42.496250 | orchestrator |
2026-02-09 03:44:42.496262 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-09 03:44:42.496268 | orchestrator | Monday 09 February 2026 03:44:29 +0000 (0:00:01.371) 0:08:24.640 *******
2026-02-09 03:44:42.496274 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.496287 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.496293 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.496299 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:42.496305 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:42.496311 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:42.496317 | orchestrator |
2026-02-09 03:44:42.496323 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-09 03:44:42.496329 | orchestrator | Monday 09 February 2026 03:44:30 +0000 (0:00:00.573) 0:08:25.213 *******
2026-02-09 03:44:42.496339 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:44:42.496349 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:44:42.496360 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:44:42.496370 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:44:42.496380 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:44:42.496390 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:44:42.496399 | orchestrator |
2026-02-09 03:44:42.496408 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-09 03:44:42.496419 | orchestrator | Monday 09 February 2026 03:44:33 +0000 (0:00:03.255) 0:08:28.469 *******
2026-02-09 03:44:42.496428 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.496437 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.496446 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.496456 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:44:42.496465 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:44:42.496474 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:44:42.496485 | orchestrator |
2026-02-09 03:44:42.496495 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-09 03:44:42.496506 | orchestrator |
2026-02-09 03:44:42.496516 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-09 03:44:42.496548 | orchestrator | Monday 09 February 2026 03:44:34 +0000 (0:00:00.806) 0:08:29.275 *******
2026-02-09 03:44:42.496561 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:44:42.496571 | orchestrator |
2026-02-09 03:44:42.496581 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-09 03:44:42.496592 | orchestrator | Monday 09 February 2026 03:44:35 +0000 (0:00:00.691) 0:08:29.967 *******
2026-02-09 03:44:42.496603 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:44:42.496613 | orchestrator |
2026-02-09 03:44:42.496623 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-09 03:44:42.496634 | orchestrator | Monday 09 February 2026 03:44:35 +0000 (0:00:00.510) 0:08:30.478 *******
2026-02-09 03:44:42.496644 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:42.496654 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:42.496665 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:42.496671 | orchestrator |
2026-02-09 03:44:42.496677 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-09 03:44:42.496683 | orchestrator | Monday 09 February 2026 03:44:36 +0000 (0:00:00.464) 0:08:30.942 *******
2026-02-09 03:44:42.496689 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.496695 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.496702 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.496708 | orchestrator |
2026-02-09 03:44:42.496715 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-09 03:44:42.496725 | orchestrator | Monday 09 February 2026 03:44:36 +0000 (0:00:00.652) 0:08:31.595 *******
2026-02-09 03:44:42.496735 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.496742 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.496754 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.496765 | orchestrator |
2026-02-09 03:44:42.496771 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-09 03:44:42.496777 | orchestrator | Monday 09 February 2026 03:44:37 +0000 (0:00:00.671) 0:08:32.266 *******
2026-02-09 03:44:42.496794 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.496801 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.496807 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.496859 | orchestrator |
2026-02-09 03:44:42.496866 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-09 03:44:42.496872 | orchestrator | Monday 09 February 2026 03:44:38 +0000 (0:00:01.033) 0:08:33.300 *******
2026-02-09 03:44:42.496878 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:42.496884 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:42.496890 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:42.496897 | orchestrator |
2026-02-09 03:44:42.496903 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-09 03:44:42.496909 | orchestrator | Monday 09 February 2026 03:44:38 +0000 (0:00:00.337) 0:08:33.637 *******
2026-02-09 03:44:42.496919 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:42.496930 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:42.496940 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:42.496950 | orchestrator |
2026-02-09 03:44:42.496961 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-09 03:44:42.496971 | orchestrator | Monday 09 February 2026 03:44:39 +0000 (0:00:00.302) 0:08:33.940 *******
2026-02-09 03:44:42.496982 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:42.496992 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:42.497001 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:42.497012 | orchestrator |
2026-02-09 03:44:42.497023 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-09 03:44:42.497034 | orchestrator | Monday 09 February 2026 03:44:39 +0000 (0:00:00.297) 0:08:34.238 *******
2026-02-09 03:44:42.497045 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.497056 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.497064 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.497070 | orchestrator |
2026-02-09 03:44:42.497076 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-09 03:44:42.497082 | orchestrator | Monday 09 February 2026 03:44:40 +0000 (0:00:00.904) 0:08:35.142 *******
2026-02-09 03:44:42.497089 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.497096 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.497113 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.497124 | orchestrator |
2026-02-09 03:44:42.497134 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-09 03:44:42.497144 | orchestrator | Monday 09 February 2026 03:44:41 +0000 (0:00:00.713) 0:08:35.856 *******
2026-02-09 03:44:42.497154 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:42.497164 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:42.497175 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:42.497185 | orchestrator |
2026-02-09 03:44:42.497196 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-09 03:44:42.497207 | orchestrator | Monday 09 February 2026 03:44:41 +0000 (0:00:00.345) 0:08:36.202 *******
2026-02-09 03:44:42.497217 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:44:42.497227 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:44:42.497239 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:44:42.497249 | orchestrator |
2026-02-09 03:44:42.497259 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-09 03:44:42.497271 | orchestrator | Monday 09 February 2026 03:44:41 +0000 (0:00:00.299) 0:08:36.502 *******
2026-02-09 03:44:42.497277 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.497283 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.497289 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.497295 | orchestrator |
2026-02-09 03:44:42.497301 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-09 03:44:42.497307 | orchestrator | Monday 09 February 2026 03:44:42 +0000 (0:00:00.516) 0:08:37.019 *******
2026-02-09 03:44:42.497313 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:44:42.497319 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:44:42.497332 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:44:42.497338 | orchestrator |
2026-02-09 03:44:42.497344 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-09 03:44:42.497360 | orchestrator | Monday 09 February 2026 03:44:42 +0000 (0:00:00.305) 0:08:37.324 *******
2026-02-09 03:45:18.123239 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:45:18.123345 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:45:18.123358 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:45:18.123368 | orchestrator |
2026-02-09 03:45:18.123379 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-09 03:45:18.123389 | orchestrator | Monday 09 February 2026 03:44:42 +0000 (0:00:00.343) 0:08:37.668 *******
2026-02-09 03:45:18.123399 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:45:18.123408 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:45:18.123417 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:45:18.123426 | orchestrator |
2026-02-09 03:45:18.123435 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-09 03:45:18.123445 | orchestrator | Monday 09 February 2026 03:44:43 +0000 (0:00:00.292) 0:08:37.961 *******
2026-02-09 03:45:18.123453 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:45:18.123463 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:45:18.123471 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:45:18.123480 | orchestrator |
2026-02-09 03:45:18.123489 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-09 03:45:18.123498 | orchestrator | Monday 09 February 2026 03:44:43 +0000 (0:00:00.544) 0:08:38.505 *******
2026-02-09 03:45:18.123507 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:45:18.123516 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:45:18.123524 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:45:18.123533 | orchestrator |
2026-02-09 03:45:18.123542 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-09 03:45:18.123551 | orchestrator | Monday 09 February 2026 03:44:44 +0000 (0:00:00.447) 0:08:38.953 *******
2026-02-09 03:45:18.123560 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:45:18.123568 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:45:18.123577 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:45:18.123586 | orchestrator |
2026-02-09 03:45:18.123608 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-09 03:45:18.123617 | orchestrator | Monday 09 February 2026 03:44:44 +0000 (0:00:00.346) 0:08:39.300 *******
2026-02-09 03:45:18.123626 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:45:18.123635 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:45:18.123644 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:45:18.123652 | orchestrator |
2026-02-09 03:45:18.123661 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-09 03:45:18.123670 | orchestrator | Monday 09 February 2026 03:44:45 +0000 (0:00:00.711) 0:08:40.012 *******
2026-02-09 03:45:18.123679 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:45:18.123687 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:45:18.123697 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-09 03:45:18.123706 | orchestrator |
2026-02-09 03:45:18.123715 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-09 03:45:18.123723 | orchestrator | Monday 09 February 2026 03:44:45 +0000 (0:00:00.387) 0:08:40.400 *******
2026-02-09 03:45:18.123732 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-09 03:45:18.123741 | orchestrator |
2026-02-09 03:45:18.123752 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-09 03:45:18.123762 | orchestrator | Monday 09 February 2026 03:44:47 +0000 (0:00:01.957) 0:08:42.357 *******
2026-02-09 03:45:18.123773 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-09 03:45:18.123838 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:45:18.123855 | orchestrator |
2026-02-09 03:45:18.123874 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-09 03:45:18.123889 | orchestrator | Monday 09 February 2026 03:44:47 +0000 (0:00:00.237) 0:08:42.594 *******
2026-02-09 03:45:18.123919 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-09 03:45:18.123938 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-09 03:45:18.123949 | orchestrator |
2026-02-09 03:45:18.123959 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-09 03:45:18.123970 | orchestrator | Monday 09 February 2026 03:44:55 +0000 (0:00:07.819) 0:08:50.414 *******
2026-02-09 03:45:18.123980 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-09 03:45:18.123989 | orchestrator |
2026-02-09 03:45:18.123998 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-09 03:45:18.124006 | orchestrator | Monday 09 February 2026 03:44:58 +0000 (0:00:03.344) 0:08:53.759 *******
2026-02-09 03:45:18.124015 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:45:18.124024 | orchestrator |
2026-02-09 03:45:18.124033 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-09 03:45:18.124041 | orchestrator | Monday 09 February 2026 03:44:59 +0000 (0:00:00.702) 0:08:54.461 *******
2026-02-09 03:45:18.124050 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-09 03:45:18.124058 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-09 03:45:18.124067 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-09 03:45:18.124091 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-09 03:45:18.124100 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-09 03:45:18.124109 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-09 03:45:18.124118 | orchestrator |
2026-02-09 03:45:18.124126 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-09 03:45:18.124135 | orchestrator | Monday 09 February 2026 03:45:00 +0000 (0:00:01.002) 0:08:55.463 *******
2026-02-09 03:45:18.124143 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:45:18.124151 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-09 03:45:18.124160 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-09 03:45:18.124169 | orchestrator |
2026-02-09 03:45:18.124177 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-09 03:45:18.124186 | orchestrator | Monday 09 February 2026 03:45:02 +0000 (0:00:02.033) 0:08:57.497 *******
2026-02-09 03:45:18.124194 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-09 03:45:18.124203 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-02-09 03:45:18.124212 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:45:18.124220 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-09 03:45:18.124229 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-09 03:45:18.124237 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:45:18.124246 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-09 03:45:18.124254 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-09 03:45:18.124263 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:45:18.124271 | orchestrator | 2026-02-09 03:45:18.124289 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-09 03:45:18.124298 | orchestrator | Monday 09 February 2026 03:45:03 +0000 (0:00:01.135) 0:08:58.633 ******* 2026-02-09 03:45:18.124306 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:45:18.124315 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:45:18.124323 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:45:18.124332 | orchestrator | 2026-02-09 03:45:18.124340 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-09 03:45:18.124349 | orchestrator | Monday 09 February 2026 03:45:06 +0000 (0:00:02.997) 0:09:01.630 ******* 2026-02-09 03:45:18.124357 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:18.124366 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:18.124374 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:45:18.124383 | orchestrator | 2026-02-09 03:45:18.124391 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-09 03:45:18.124400 | orchestrator | Monday 09 February 2026 03:45:07 +0000 (0:00:00.358) 0:09:01.989 ******* 2026-02-09 03:45:18.124408 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-09 03:45:18.124417 | orchestrator | 2026-02-09 03:45:18.124426 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-09 03:45:18.124434 | orchestrator | Monday 09 February 2026 03:45:07 +0000 (0:00:00.764) 0:09:02.753 ******* 2026-02-09 03:45:18.124443 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:45:18.124451 | orchestrator | 2026-02-09 03:45:18.124460 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-09 03:45:18.124468 | orchestrator | Monday 09 February 2026 03:45:08 +0000 (0:00:00.530) 0:09:03.283 ******* 2026-02-09 03:45:18.124477 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:45:18.124485 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:45:18.124494 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:45:18.124502 | orchestrator | 2026-02-09 03:45:18.124511 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-09 03:45:18.124519 | orchestrator | Monday 09 February 2026 03:45:09 +0000 (0:00:01.209) 0:09:04.493 ******* 2026-02-09 03:45:18.124528 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:45:18.124536 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:45:18.124550 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:45:18.124558 | orchestrator | 2026-02-09 03:45:18.124567 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-09 03:45:18.124575 | orchestrator | Monday 09 February 2026 03:45:11 +0000 (0:00:01.535) 0:09:06.029 ******* 2026-02-09 03:45:18.124584 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:45:18.124592 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:45:18.124600 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:45:18.124609 | orchestrator | 2026-02-09 
03:45:18.124617 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-09 03:45:18.124626 | orchestrator | Monday 09 February 2026 03:45:12 +0000 (0:00:01.800) 0:09:07.829 ******* 2026-02-09 03:45:18.124634 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:45:18.124643 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:45:18.124652 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:45:18.124660 | orchestrator | 2026-02-09 03:45:18.124669 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-09 03:45:18.124677 | orchestrator | Monday 09 February 2026 03:45:14 +0000 (0:00:02.009) 0:09:09.838 ******* 2026-02-09 03:45:18.124686 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:18.124695 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:18.124703 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:18.124712 | orchestrator | 2026-02-09 03:45:18.124720 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-09 03:45:18.124729 | orchestrator | Monday 09 February 2026 03:45:16 +0000 (0:00:01.521) 0:09:11.360 ******* 2026-02-09 03:45:18.124743 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:45:18.124752 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:45:18.124760 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:45:18.124769 | orchestrator | 2026-02-09 03:45:18.124777 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-09 03:45:18.124786 | orchestrator | Monday 09 February 2026 03:45:17 +0000 (0:00:00.704) 0:09:12.064 ******* 2026-02-09 03:45:18.124799 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:45:37.945670 | orchestrator | 2026-02-09 03:45:37.945750 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-09 03:45:37.945759 | orchestrator | Monday 09 February 2026 03:45:18 +0000 (0:00:00.888) 0:09:12.953 ******* 2026-02-09 03:45:37.945765 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.945770 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.945775 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.945780 | orchestrator | 2026-02-09 03:45:37.945785 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-09 03:45:37.945790 | orchestrator | Monday 09 February 2026 03:45:18 +0000 (0:00:00.350) 0:09:13.303 ******* 2026-02-09 03:45:37.945795 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:45:37.945801 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:45:37.945851 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:45:37.945856 | orchestrator | 2026-02-09 03:45:37.945861 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-09 03:45:37.945866 | orchestrator | Monday 09 February 2026 03:45:19 +0000 (0:00:01.269) 0:09:14.572 ******* 2026-02-09 03:45:37.945871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 03:45:37.945883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 03:45:37.945889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 03:45:37.945894 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.945898 | orchestrator | 2026-02-09 03:45:37.945903 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-09 03:45:37.945908 | orchestrator | Monday 09 February 2026 03:45:20 +0000 (0:00:00.938) 0:09:15.511 ******* 2026-02-09 03:45:37.945913 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.945918 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.945923 | orchestrator | ok: [testbed-node-5] 2026-02-09 
03:45:37.945927 | orchestrator | 2026-02-09 03:45:37.945932 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-09 03:45:37.945937 | orchestrator | 2026-02-09 03:45:37.945941 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-09 03:45:37.945946 | orchestrator | Monday 09 February 2026 03:45:21 +0000 (0:00:00.916) 0:09:16.427 ******* 2026-02-09 03:45:37.945951 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:45:37.945957 | orchestrator | 2026-02-09 03:45:37.945962 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-09 03:45:37.945966 | orchestrator | Monday 09 February 2026 03:45:22 +0000 (0:00:00.596) 0:09:17.024 ******* 2026-02-09 03:45:37.945971 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:45:37.945976 | orchestrator | 2026-02-09 03:45:37.945981 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-09 03:45:37.945985 | orchestrator | Monday 09 February 2026 03:45:23 +0000 (0:00:00.844) 0:09:17.869 ******* 2026-02-09 03:45:37.945992 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.946000 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:37.946007 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:45:37.946054 | orchestrator | 2026-02-09 03:45:37.946062 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-09 03:45:37.946092 | orchestrator | Monday 09 February 2026 03:45:23 +0000 (0:00:00.361) 0:09:18.230 ******* 2026-02-09 03:45:37.946100 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946107 | orchestrator | ok: [testbed-node-4] 2026-02-09 
03:45:37.946113 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946120 | orchestrator | 2026-02-09 03:45:37.946127 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-09 03:45:37.946134 | orchestrator | Monday 09 February 2026 03:45:24 +0000 (0:00:00.767) 0:09:18.998 ******* 2026-02-09 03:45:37.946140 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946147 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.946154 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946161 | orchestrator | 2026-02-09 03:45:37.946181 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-09 03:45:37.946188 | orchestrator | Monday 09 February 2026 03:45:25 +0000 (0:00:01.006) 0:09:20.004 ******* 2026-02-09 03:45:37.946195 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946203 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.946211 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946219 | orchestrator | 2026-02-09 03:45:37.946226 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-09 03:45:37.946235 | orchestrator | Monday 09 February 2026 03:45:25 +0000 (0:00:00.754) 0:09:20.758 ******* 2026-02-09 03:45:37.946243 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.946251 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:37.946259 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:45:37.946267 | orchestrator | 2026-02-09 03:45:37.946272 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-09 03:45:37.946278 | orchestrator | Monday 09 February 2026 03:45:26 +0000 (0:00:00.325) 0:09:21.084 ******* 2026-02-09 03:45:37.946283 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.946289 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:37.946294 | orchestrator | skipping: 
[testbed-node-5] 2026-02-09 03:45:37.946299 | orchestrator | 2026-02-09 03:45:37.946304 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-09 03:45:37.946310 | orchestrator | Monday 09 February 2026 03:45:26 +0000 (0:00:00.346) 0:09:21.431 ******* 2026-02-09 03:45:37.946315 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.946320 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:37.946325 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:45:37.946331 | orchestrator | 2026-02-09 03:45:37.946336 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-09 03:45:37.946341 | orchestrator | Monday 09 February 2026 03:45:27 +0000 (0:00:00.624) 0:09:22.055 ******* 2026-02-09 03:45:37.946347 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946352 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.946357 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946362 | orchestrator | 2026-02-09 03:45:37.946367 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-09 03:45:37.946386 | orchestrator | Monday 09 February 2026 03:45:27 +0000 (0:00:00.785) 0:09:22.840 ******* 2026-02-09 03:45:37.946391 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946395 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.946400 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946404 | orchestrator | 2026-02-09 03:45:37.946409 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-09 03:45:37.946413 | orchestrator | Monday 09 February 2026 03:45:28 +0000 (0:00:00.794) 0:09:23.634 ******* 2026-02-09 03:45:37.946418 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.946422 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:37.946427 | orchestrator | skipping: [testbed-node-5] 2026-02-09 
03:45:37.946431 | orchestrator | 2026-02-09 03:45:37.946436 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-09 03:45:37.946440 | orchestrator | Monday 09 February 2026 03:45:29 +0000 (0:00:00.318) 0:09:23.953 ******* 2026-02-09 03:45:37.946451 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.946456 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:37.946460 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:45:37.946465 | orchestrator | 2026-02-09 03:45:37.946469 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-09 03:45:37.946474 | orchestrator | Monday 09 February 2026 03:45:29 +0000 (0:00:00.639) 0:09:24.593 ******* 2026-02-09 03:45:37.946478 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946483 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.946487 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946492 | orchestrator | 2026-02-09 03:45:37.946496 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-09 03:45:37.946501 | orchestrator | Monday 09 February 2026 03:45:30 +0000 (0:00:00.419) 0:09:25.012 ******* 2026-02-09 03:45:37.946505 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946510 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.946514 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946519 | orchestrator | 2026-02-09 03:45:37.946523 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-09 03:45:37.946528 | orchestrator | Monday 09 February 2026 03:45:30 +0000 (0:00:00.416) 0:09:25.428 ******* 2026-02-09 03:45:37.946532 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946537 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.946541 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946546 | orchestrator | 2026-02-09 
03:45:37.946550 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-09 03:45:37.946555 | orchestrator | Monday 09 February 2026 03:45:31 +0000 (0:00:00.430) 0:09:25.858 ******* 2026-02-09 03:45:37.946559 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.946564 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:37.946568 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:45:37.946573 | orchestrator | 2026-02-09 03:45:37.946577 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-09 03:45:37.946582 | orchestrator | Monday 09 February 2026 03:45:31 +0000 (0:00:00.647) 0:09:26.506 ******* 2026-02-09 03:45:37.946586 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.946591 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:37.946595 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:45:37.946600 | orchestrator | 2026-02-09 03:45:37.946604 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-09 03:45:37.946609 | orchestrator | Monday 09 February 2026 03:45:32 +0000 (0:00:00.373) 0:09:26.879 ******* 2026-02-09 03:45:37.946613 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:45:37.946618 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:45:37.946622 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:45:37.946627 | orchestrator | 2026-02-09 03:45:37.946631 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-09 03:45:37.946636 | orchestrator | Monday 09 February 2026 03:45:32 +0000 (0:00:00.377) 0:09:27.256 ******* 2026-02-09 03:45:37.946641 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946645 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.946649 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946654 | orchestrator | 2026-02-09 03:45:37.946658 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-09 03:45:37.946666 | orchestrator | Monday 09 February 2026 03:45:32 +0000 (0:00:00.375) 0:09:27.632 ******* 2026-02-09 03:45:37.946671 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:45:37.946676 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:45:37.946680 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:45:37.946685 | orchestrator | 2026-02-09 03:45:37.946689 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-09 03:45:37.946694 | orchestrator | Monday 09 February 2026 03:45:33 +0000 (0:00:00.951) 0:09:28.583 ******* 2026-02-09 03:45:37.946699 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:45:37.946708 | orchestrator | 2026-02-09 03:45:37.946712 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-09 03:45:37.946717 | orchestrator | Monday 09 February 2026 03:45:34 +0000 (0:00:00.571) 0:09:29.155 ******* 2026-02-09 03:45:37.946721 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:45:37.946726 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-09 03:45:37.946731 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-09 03:45:37.946735 | orchestrator | 2026-02-09 03:45:37.946740 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-09 03:45:37.946744 | orchestrator | Monday 09 February 2026 03:45:36 +0000 (0:00:02.247) 0:09:31.402 ******* 2026-02-09 03:45:37.946749 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-09 03:45:37.946754 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-09 03:45:37.946758 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:45:37.946763 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-09 03:45:37.946768 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-09 03:45:37.946772 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:45:37.946777 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-09 03:45:37.946781 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-09 03:45:37.946786 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:45:37.946790 | orchestrator | 2026-02-09 03:45:37.946797 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-09 03:46:26.921451 | orchestrator | Monday 09 February 2026 03:45:37 +0000 (0:00:01.368) 0:09:32.771 ******* 2026-02-09 03:46:26.921555 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:46:26.921568 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:46:26.921575 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:46:26.921582 | orchestrator | 2026-02-09 03:46:26.921591 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-09 03:46:26.921599 | orchestrator | Monday 09 February 2026 03:45:38 +0000 (0:00:00.325) 0:09:33.097 ******* 2026-02-09 03:46:26.921607 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:46:26.921615 | orchestrator | 2026-02-09 03:46:26.921622 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-09 03:46:26.921630 | orchestrator | Monday 09 February 2026 03:45:39 +0000 (0:00:00.753) 0:09:33.851 ******* 2026-02-09 03:46:26.921638 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-09 03:46:26.921649 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-09 03:46:26.921657 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-09 03:46:26.921664 | orchestrator | 2026-02-09 03:46:26.921671 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-09 03:46:26.921678 | orchestrator | Monday 09 February 2026 03:45:39 +0000 (0:00:00.804) 0:09:34.656 ******* 2026-02-09 03:46:26.921685 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:46:26.921693 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-09 03:46:26.921700 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:46:26.921707 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-09 03:46:26.921714 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:46:26.921779 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-09 03:46:26.921788 | orchestrator | 2026-02-09 03:46:26.921795 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-09 03:46:26.921841 | orchestrator | Monday 09 February 2026 03:45:43 +0000 (0:00:04.175) 0:09:38.831 ******* 2026-02-09 03:46:26.921849 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:46:26.921856 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-09 03:46:26.921863 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:46:26.921870 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-09 03:46:26.921876 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-09 03:46:26.921883 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-09 03:46:26.921890 | orchestrator | 2026-02-09 03:46:26.921910 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-09 03:46:26.921918 | orchestrator | Monday 09 February 2026 03:45:46 +0000 (0:00:02.200) 0:09:41.032 ******* 2026-02-09 03:46:26.921925 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-09 03:46:26.921933 | orchestrator | changed: [testbed-node-3] 2026-02-09 03:46:26.921940 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-09 03:46:26.921946 | orchestrator | changed: [testbed-node-4] 2026-02-09 03:46:26.921953 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-09 03:46:26.921960 | orchestrator | changed: [testbed-node-5] 2026-02-09 03:46:26.921967 | orchestrator | 2026-02-09 03:46:26.921975 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-09 03:46:26.921982 | orchestrator | Monday 09 February 2026 03:45:47 +0000 (0:00:01.381) 0:09:42.413 ******* 2026-02-09 03:46:26.921989 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-09 03:46:26.921996 | orchestrator | 2026-02-09 03:46:26.922004 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-09 03:46:26.922066 | orchestrator | Monday 09 February 2026 03:45:47 +0000 (0:00:00.222) 0:09:42.636 ******* 2026-02-09 03:46:26.922074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})
2026-02-09 03:46:26.922081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922119 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:26.922125 | orchestrator |
2026-02-09 03:46:26.922130 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-02-09 03:46:26.922135 | orchestrator | Monday 09 February 2026 03:45:48 +0000 (0:00:00.549) 0:09:43.185 *******
2026-02-09 03:46:26.922140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922176 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:26.922181 | orchestrator |
2026-02-09 03:46:26.922187 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-09 03:46:26.922192 | orchestrator | Monday 09 February 2026 03:45:48 +0000 (0:00:00.598) 0:09:43.784 *******
2026-02-09 03:46:26.922197 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922203 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922208 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922213 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922218 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 03:46:26.922223 | orchestrator |
2026-02-09 03:46:26.922228 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-09 03:46:26.922233 | orchestrator | Monday 09 February 2026 03:46:17 +0000 (0:00:28.365) 0:10:12.149 *******
2026-02-09 03:46:26.922238 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:26.922243 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:26.922249 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:26.922254 | orchestrator |
2026-02-09 03:46:26.922259 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-09 03:46:26.922263 | orchestrator | Monday 09 February 2026 03:46:17 +0000 (0:00:00.365) 0:10:12.514 *******
2026-02-09 03:46:26.922269 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:26.922273 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:26.922279 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:26.922284 | orchestrator |
2026-02-09 03:46:26.922289 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-09 03:46:26.922299 | orchestrator | Monday 09 February 2026 03:46:18 +0000 (0:00:00.331) 0:10:12.846 *******
2026-02-09 03:46:26.922305 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:46:26.922310 | orchestrator |
2026-02-09 03:46:26.922315 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-09 03:46:26.922319 | orchestrator | Monday 09 February 2026 03:46:18 +0000 (0:00:00.915) 0:10:13.762 *******
2026-02-09 03:46:26.922323 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:46:26.922328 | orchestrator |
2026-02-09 03:46:26.922332 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-09 03:46:26.922337 | orchestrator | Monday 09 February 2026 03:46:19 +0000 (0:00:00.846) 0:10:14.609 *******
2026-02-09 03:46:26.922341 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:46:26.922345 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:46:26.922350 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:46:26.922354 | orchestrator |
2026-02-09 03:46:26.922358 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-09 03:46:26.922362 | orchestrator | Monday 09 February 2026 03:46:21 +0000 (0:00:01.361) 0:10:15.970 *******
2026-02-09 03:46:26.922367 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:46:26.922371 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:46:26.922379 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:46:26.922383 | orchestrator |
2026-02-09 03:46:26.922387 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-09 03:46:26.922392 | orchestrator | Monday 09 February 2026 03:46:22 +0000 (0:00:01.239) 0:10:17.210 *******
2026-02-09 03:46:26.922396 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:46:26.922400 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:46:26.922405 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:46:26.922409 | orchestrator |
2026-02-09 03:46:26.922413 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-09 03:46:26.922418 | orchestrator | Monday 09 February 2026 03:46:24 +0000 (0:00:01.752) 0:10:18.962 *******
2026-02-09 03:46:26.922425 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-09 03:46:31.094292 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-09 03:46:31.094365 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-09 03:46:31.094371 | orchestrator |
2026-02-09 03:46:31.094376 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-09 03:46:31.094382 | orchestrator | Monday 09 February 2026 03:46:26 +0000 (0:00:02.783) 0:10:21.746 *******
2026-02-09 03:46:31.094386 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:31.094391 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:31.094395 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:31.094399 | orchestrator |
2026-02-09 03:46:31.094403 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-09 03:46:31.094407 | orchestrator | Monday 09 February 2026 03:46:27 +0000 (0:00:00.400) 0:10:22.146 *******
2026-02-09 03:46:31.094412 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:46:31.094416 | orchestrator |
2026-02-09 03:46:31.094422 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-09 03:46:31.094429 | orchestrator | Monday 09 February 2026 03:46:28 +0000 (0:00:00.399) 0:10:23.010 *******
2026-02-09 03:46:31.094435 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:31.094442 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:31.094449 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:31.094455 | orchestrator |
2026-02-09 03:46:31.094461 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-09 03:46:31.094468 | orchestrator | Monday 09 February 2026 03:46:28 +0000 (0:00:00.399) 0:10:23.410 *******
2026-02-09 03:46:31.094474 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:31.094480 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:31.094485 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:31.094491 | orchestrator |
2026-02-09 03:46:31.094497 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-09 03:46:31.094503 | orchestrator | Monday 09 February 2026 03:46:28 +0000 (0:00:00.386) 0:10:23.797 *******
2026-02-09 03:46:31.094509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:46:31.094515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-09 03:46:31.094521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 03:46:31.094528 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:31.094534 | orchestrator |
2026-02-09 03:46:31.094540 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-09 03:46:31.094546 | orchestrator | Monday 09 February 2026 03:46:29 +0000 (0:00:01.008) 0:10:24.805 *******
2026-02-09 03:46:31.094554 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:31.094561 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:31.094567 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:31.094573 | orchestrator |
2026-02-09 03:46:31.094579 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:46:31.094607 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-02-09 03:46:31.094615 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-02-09 03:46:31.094635 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-02-09 03:46:31.094641 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-02-09 03:46:31.094648 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-02-09 03:46:31.094654 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-02-09 03:46:31.094660 | orchestrator |
2026-02-09 03:46:31.094667 | orchestrator |
2026-02-09 03:46:31.094674 | orchestrator |
2026-02-09 03:46:31.094681 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:46:31.094688 | orchestrator | Monday 09 February 2026 03:46:30 +0000 (0:00:00.580) 0:10:25.386 *******
2026-02-09 03:46:31.094694 | orchestrator | ===============================================================================
2026-02-09 03:46:31.094698 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 55.98s
2026-02-09 03:46:31.094702 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.27s
2026-02-09 03:46:31.094706 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 28.37s
2026-02-09 03:46:31.094710 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.11s
2026-02-09 03:46:31.094714 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.75s
2026-02-09 03:46:31.094718 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.32s
2026-02-09 03:46:31.094722 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.00s
2026-02-09 03:46:31.094725 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.69s
2026-02-09 03:46:31.094729 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.82s
2026-02-09 03:46:31.094745 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.45s
2026-02-09 03:46:31.094749 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.35s
2026-02-09 03:46:31.094753 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.10s
2026-02-09 03:46:31.094757 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.23s
2026-02-09 03:46:31.094761 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.18s
2026-02-09 03:46:31.094765 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.79s
2026-02-09 03:46:31.094768 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.78s
2026-02-09 03:46:31.094773 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.42s
2026-02-09 03:46:31.094777 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.34s
2026-02-09 03:46:31.094781 | orchestrator | ceph-handler : Restart the ceph-crash service --------------------------- 3.26s
2026-02-09 03:46:31.094785 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.03s
2026-02-09 03:46:33.728070 | orchestrator | 2026-02-09 03:46:33 | INFO  | Task 7a1fa69e-5777-4e06-a574-5baa63e04575 (ceph-pools) was prepared for execution.
2026-02-09 03:46:33.728179 | orchestrator | 2026-02-09 03:46:33 | INFO  | It takes a moment until task 7a1fa69e-5777-4e06-a574-5baa63e04575 (ceph-pools) has been started and output is visible here.
2026-02-09 03:46:48.311910 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-09 03:46:48.312018 | orchestrator | 2.16.14
2026-02-09 03:46:48.312034 | orchestrator |
2026-02-09 03:46:48.312045 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-09 03:46:48.312056 | orchestrator |
2026-02-09 03:46:48.312067 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-09 03:46:48.312073 | orchestrator | Monday 09 February 2026 03:46:38 +0000 (0:00:00.650) 0:00:00.650 *******
2026-02-09 03:46:48.312079 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:46:48.312086 | orchestrator |
2026-02-09 03:46:48.312091 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-09 03:46:48.312097 | orchestrator | Monday 09 February 2026 03:46:39 +0000 (0:00:00.681) 0:00:01.332 *******
2026-02-09 03:46:48.312103 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:48.312109 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:48.312114 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:48.312120 | orchestrator |
2026-02-09 03:46:48.312125 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-09 03:46:48.312131 | orchestrator | Monday 09 February 2026 03:46:39 +0000 (0:00:00.607) 0:00:01.939 *******
2026-02-09 03:46:48.312136 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:48.312142 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:48.312147 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:48.312152 | orchestrator |
2026-02-09 03:46:48.312158 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-09 03:46:48.312163 | orchestrator | Monday 09 February 2026 03:46:40 +0000 (0:00:00.308) 0:00:02.247 *******
2026-02-09 03:46:48.312169 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:48.312174 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:48.312180 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:48.312185 | orchestrator |
2026-02-09 03:46:48.312190 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-09 03:46:48.312209 | orchestrator | Monday 09 February 2026 03:46:40 +0000 (0:00:00.812) 0:00:03.059 *******
2026-02-09 03:46:48.312215 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:48.312220 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:48.312225 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:48.312231 | orchestrator |
2026-02-09 03:46:48.312236 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-09 03:46:48.312242 | orchestrator | Monday 09 February 2026 03:46:41 +0000 (0:00:00.310) 0:00:03.370 *******
2026-02-09 03:46:48.312247 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:48.312252 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:48.312258 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:48.312263 | orchestrator |
2026-02-09 03:46:48.312268 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-09 03:46:48.312274 | orchestrator | Monday 09 February 2026 03:46:41 +0000 (0:00:00.299) 0:00:03.670 *******
2026-02-09 03:46:48.312279 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:48.312284 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:48.312290 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:48.312295 | orchestrator |
2026-02-09 03:46:48.312300 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-09 03:46:48.312306 | orchestrator | Monday 09 February 2026 03:46:41 +0000 (0:00:00.318) 0:00:03.989 *******
2026-02-09 03:46:48.312312 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:48.312318 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:48.312324 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:48.312329 | orchestrator |
2026-02-09 03:46:48.312334 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-09 03:46:48.312340 | orchestrator | Monday 09 February 2026 03:46:42 +0000 (0:00:00.572) 0:00:04.561 *******
2026-02-09 03:46:48.312364 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:48.312369 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:48.312375 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:48.312381 | orchestrator |
2026-02-09 03:46:48.312388 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-09 03:46:48.312394 | orchestrator | Monday 09 February 2026 03:46:42 +0000 (0:00:00.328) 0:00:04.890 *******
2026-02-09 03:46:48.312401 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-09 03:46:48.312407 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 03:46:48.312413 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 03:46:48.312419 | orchestrator |
2026-02-09 03:46:48.312425 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-09 03:46:48.312431 | orchestrator | Monday 09 February 2026 03:46:43 +0000 (0:00:00.461) 0:00:05.577 *******
2026-02-09 03:46:48.312438 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:48.312444 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:48.312451 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:48.312457 | orchestrator |
2026-02-09 03:46:48.312463 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-09 03:46:48.312469 | orchestrator | Monday 09 February 2026 03:46:43 +0000 (0:00:00.687) 0:00:06.039 *******
2026-02-09 03:46:48.312475 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-09 03:46:48.312482 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 03:46:48.312488 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 03:46:48.312494 | orchestrator |
2026-02-09 03:46:48.312500 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-09 03:46:48.312506 | orchestrator | Monday 09 February 2026 03:46:46 +0000 (0:00:02.236) 0:00:08.276 *******
2026-02-09 03:46:48.312512 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-09 03:46:48.312520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-09 03:46:48.312526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-09 03:46:48.312532 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:48.312539 | orchestrator |
2026-02-09 03:46:48.312565 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-09 03:46:48.312574 | orchestrator | Monday 09 February 2026 03:46:46 +0000 (0:00:00.691) 0:00:08.968 *******
2026-02-09 03:46:48.312586 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-09 03:46:48.312598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-09 03:46:48.312605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-09 03:46:48.312612 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:48.312618 | orchestrator |
2026-02-09 03:46:48.312625 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-09 03:46:48.312631 | orchestrator | Monday 09 February 2026 03:46:47 +0000 (0:00:01.128) 0:00:10.096 *******
2026-02-09 03:46:48.312642 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-09 03:46:48.312657 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-09 03:46:48.312664 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-09 03:46:48.312671 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:48.312677 | orchestrator |
2026-02-09 03:46:48.312684 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-09 03:46:48.312690 | orchestrator | Monday 09 February 2026 03:46:48 +0000 (0:00:00.196) 0:00:10.293 *******
2026-02-09 03:46:48.312699 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a495b1786f93', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-09 03:46:44.743928', 'end': '2026-02-09 03:46:44.792177', 'delta': '0:00:00.048249', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a495b1786f93'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-09 03:46:48.312708 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ab15bd6989cf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-09 03:46:45.315747', 'end': '2026-02-09 03:46:45.363881', 'delta': '0:00:00.048134', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ab15bd6989cf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-09 03:46:48.312721 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '08d9b4f0b230', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-09 03:46:45.867322', 'end': '2026-02-09 03:46:45.920823', 'delta': '0:00:00.053501', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['08d9b4f0b230'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-09 03:46:55.807163 | orchestrator |
2026-02-09 03:46:55.807298 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-09 03:46:55.807316 | orchestrator | Monday 09 February 2026 03:46:48 +0000 (0:00:00.226) 0:00:10.519 *******
2026-02-09 03:46:55.807328 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:55.807340 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:46:55.807373 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:46:55.807384 | orchestrator |
2026-02-09 03:46:55.807413 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-09 03:46:55.807425 | orchestrator | Monday 09 February 2026 03:46:48 +0000 (0:00:00.486) 0:00:11.006 *******
2026-02-09 03:46:55.807437 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-09 03:46:55.807463 | orchestrator |
2026-02-09 03:46:55.807483 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-09 03:46:55.807504 | orchestrator | Monday 09 February 2026 03:46:50 +0000 (0:00:01.663) 0:00:12.670 *******
2026-02-09 03:46:55.807541 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.807563 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.807583 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.807603 | orchestrator |
2026-02-09 03:46:55.807622 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-09 03:46:55.807642 | orchestrator | Monday 09 February 2026 03:46:50 +0000 (0:00:00.305) 0:00:12.975 *******
2026-02-09 03:46:55.807653 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.807664 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.807675 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.807688 | orchestrator |
2026-02-09 03:46:55.807700 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-09 03:46:55.807714 | orchestrator | Monday 09 February 2026 03:46:51 +0000 (0:00:01.010) 0:00:13.986 *******
2026-02-09 03:46:55.807727 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.807739 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.807752 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.807765 | orchestrator |
2026-02-09 03:46:55.807778 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-09 03:46:55.807791 | orchestrator | Monday 09 February 2026 03:46:52 +0000 (0:00:00.382) 0:00:14.369 *******
2026-02-09 03:46:55.807836 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:46:55.807857 | orchestrator |
2026-02-09 03:46:55.807876 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-09 03:46:55.807894 | orchestrator | Monday 09 February 2026 03:46:52 +0000 (0:00:00.148) 0:00:14.517 *******
2026-02-09 03:46:55.807912 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.807931 | orchestrator |
2026-02-09 03:46:55.807949 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-09 03:46:55.807967 | orchestrator | Monday 09 February 2026 03:46:52 +0000 (0:00:00.254) 0:00:14.771 *******
2026-02-09 03:46:55.807984 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.808002 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.808021 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.808040 | orchestrator |
2026-02-09 03:46:55.808060 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-09 03:46:55.808078 | orchestrator | Monday 09 February 2026 03:46:52 +0000 (0:00:00.405) 0:00:15.084 *******
2026-02-09 03:46:55.808096 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.808114 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.808133 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.808152 | orchestrator |
2026-02-09 03:46:55.808169 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-09 03:46:55.808188 | orchestrator | Monday 09 February 2026 03:46:53 +0000 (0:00:00.596) 0:00:15.490 *******
2026-02-09 03:46:55.808200 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.808210 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.808221 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.808231 | orchestrator |
2026-02-09 03:46:55.808242 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-09 03:46:55.808253 | orchestrator | Monday 09 February 2026 03:46:53 +0000 (0:00:00.596) 0:00:16.086 *******
2026-02-09 03:46:55.808263 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.808287 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.808298 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.808309 | orchestrator |
2026-02-09 03:46:55.808320 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-09 03:46:55.808331 | orchestrator | Monday 09 February 2026 03:46:54 +0000 (0:00:00.394) 0:00:16.481 *******
2026-02-09 03:46:55.808342 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.808352 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.808363 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.808374 | orchestrator |
2026-02-09 03:46:55.808384 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-09 03:46:55.808396 | orchestrator | Monday 09 February 2026 03:46:54 +0000 (0:00:00.376) 0:00:16.857 *******
2026-02-09 03:46:55.808414 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.808433 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.808451 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.808469 | orchestrator |
2026-02-09 03:46:55.808488 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-09 03:46:55.808507 | orchestrator | Monday 09 February 2026 03:46:55 +0000 (0:00:00.599) 0:00:17.457 *******
2026-02-09 03:46:55.808527 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:55.808545 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:46:55.808564 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:46:55.808576 | orchestrator |
2026-02-09 03:46:55.808586 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-09 03:46:55.808597 | orchestrator | Monday 09 February 2026 03:46:55 +0000 (0:00:00.328) 0:00:17.785 *******
2026-02-09 03:46:55.808633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b', 'dm-uuid-LVM-0WjeRAA0lqf3cpEn6bug4xs5UGMazLjB0h01y39wS0A1Owicu3DkC9MW8cY3xQUQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.808659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d', 'dm-uuid-LVM-5Oms0YhgvCVrWp80wJ4aA96yxcElodY708xUFI15dbkcdnHIR6L7mBfIOccNLzlf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.808673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.808687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.808699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.808720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.808731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.808742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.808753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.808775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-09 03:46:55.902087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3', 'dm-uuid-LVM-28EU5fYWgLFVVTr1j10NPpT02LXZ3m2dqNBTokCpiFfT2ODyZTZ76Gse0HWZzEjm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096',
'vendor': None, 'virtual': 1}})  2026-02-09 03:46:55.902199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:55.902250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UekHwl-BrrL-tQwo-R3UW-N6L4-qGv4-ixmNDb', 'scsi-0QEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d', 'scsi-SQEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:55.902291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9', 'dm-uuid-LVM-3CHn6ZP2pM8HpEDxSzeilwVQRF6lfj6OM8VSybDQwMAeXi61wvDItRKk6IUvThlx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:55.902319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DXOpal-X33W-ipPf-IHHU-xTym-5svh-1uUmz7', 'scsi-0QEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4', 'scsi-SQEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:55.902337 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:55.902357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa', 'scsi-SQEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:55.902385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:55.902403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:55.902420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-09 03:46:55.902437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:55.902465 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.101299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.101395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.101408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.101443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.101473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GwhUsL-bhJV-LTOj-ZPeb-I83T-YRPV-54WlOk', 'scsi-0QEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509', 'scsi-SQEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.101488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TEtRPa-KFlO-eA6E-SkhX-jKKT-2BmX-PRBRTw', 'scsi-0QEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c', 'scsi-SQEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.101496 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:46:56.101507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24', 'scsi-SQEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.101520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.101527 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:46:56.101535 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92', 'dm-uuid-LVM-SZPyknUsbhfLaF3x5K31ctP0vcigu1Pwp97ku36NfSW31vos0Gj86u7MmrIxN6I0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.101545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6', 'dm-uuid-LVM-UtcmtJOb91d0iC1jVKeu7Rh960XYKnyIcb9DX8DrOUkJ6Npc5MMds8BTnO00gFXN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.101552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.101565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.347528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.347709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.347757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.347769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-02-09 03:46:56.347781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.347793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-09 03:46:56.347891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.347918 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jvH3Zw-djyF-WIKe-T88H-f7IR-FEUt-vCkV4E', 'scsi-0QEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717', 'scsi-SQEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.347931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nj2fwl-jxqG-fYtS-q2di-jVVW-fVes-RibCJ0', 'scsi-0QEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46', 'scsi-SQEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.347943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0', 'scsi-SQEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.347955 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-09-02-24-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-09 03:46:56.347966 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:46:56.347979 | orchestrator | 2026-02-09 03:46:56.347991 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-09 03:46:56.348003 | orchestrator | Monday 09 February 2026 03:46:56 +0000 (0:00:00.663) 0:00:18.448 ******* 2026-02-09 03:46:56.348023 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b', 'dm-uuid-LVM-0WjeRAA0lqf3cpEn6bug4xs5UGMazLjB0h01y39wS0A1Owicu3DkC9MW8cY3xQUQ'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:46:56.482730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d', 'dm-uuid-LVM-5Oms0YhgvCVrWp80wJ4aA96yxcElodY708xUFI15dbkcdnHIR6L7mBfIOccNLzlf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:46:56.482899 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-09 03:46:56.482923 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-09 03:46:56.482937 | orchestrator | skipping: [testbed-node-3] => (items loop2-loop7, sda, sdb, sdc, sdd, sr0; conditional 'osd_auto_discovery | default(False) | bool' was false for each device; per-device facts omitted)
2026-02-09 03:46:56.483019 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; conditional 'osd_auto_discovery | default(False) | bool' was false for each device; per-device facts omitted)
2026-02-09 03:46:56.646482 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:46:56.745946 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; conditional 'osd_auto_discovery | default(False) | bool' was false for each device; per-device facts omitted)
2026-02-09 03:46:56.900642 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:47:07.783707 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:47:07.783731 | orchestrator |
2026-02-09 03:47:07.783749 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-09 03:47:07.783767 | orchestrator | Monday 09 February 2026 03:46:56 +0000 (0:00:00.661) 0:00:19.110 *******
2026-02-09 03:47:07.783783 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:47:07.783906 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:47:07.783929 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:47:07.783944 | orchestrator |
2026-02-09 03:47:07.783961 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-09 03:47:07.783979 | orchestrator | Monday 09 February 2026 03:46:57 +0000 (0:00:00.891) 0:00:20.001 *******
2026-02-09 03:47:07.783997 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:47:07.784015 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:47:07.784032 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:47:07.784049 | orchestrator |
2026-02-09 03:47:07.784068 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-09 03:47:07.784086 | orchestrator | Monday 09 February 2026 03:46:58 +0000 (0:00:00.339) 0:00:20.341 *******
2026-02-09 03:47:07.784104 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:47:07.784121 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:47:07.784141 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:47:07.784161 | orchestrator |
2026-02-09 03:47:07.784184 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-09 03:47:07.784201 | orchestrator | Monday 09 February 2026 03:46:58 +0000 (0:00:00.683) 0:00:21.025 *******
2026-02-09 03:47:07.784237 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:47:07.784257 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:47:07.784274 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:47:07.784290 | orchestrator |
2026-02-09 03:47:07.784308 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-09 03:47:07.784325 | orchestrator | Monday 09 February 2026 03:46:59 +0000 (0:00:00.308) 0:00:21.333 *******
2026-02-09 03:47:07.784341 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:47:07.784355 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:47:07.784370 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:47:07.784385 | orchestrator |
2026-02-09 03:47:07.784400 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-09 03:47:07.784413 | orchestrator | Monday 09 February 2026 03:46:59 +0000 (0:00:00.848) 0:00:22.182 *******
2026-02-09 03:47:07.784427 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:47:07.784441 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:47:07.784456 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:47:07.784471 | orchestrator |
2026-02-09 03:47:07.784486 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-09 03:47:07.784502 | orchestrator | Monday 09 February 2026 03:47:00 +0000 (0:00:00.346) 0:00:22.528 *******
2026-02-09 03:47:07.784517 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-09 03:47:07.784533 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-09 03:47:07.784548 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-09 03:47:07.784564 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-09 03:47:07.784580 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-09 03:47:07.784595 | orchestrator |
ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-09 03:47:07.784609 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-09 03:47:07.784624 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-09 03:47:07.784658 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-09 03:47:07.784674 | orchestrator | 2026-02-09 03:47:07.784690 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-09 03:47:07.784705 | orchestrator | Monday 09 February 2026 03:47:01 +0000 (0:00:01.192) 0:00:23.722 ******* 2026-02-09 03:47:07.784751 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-09 03:47:07.784768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-09 03:47:07.784783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-09 03:47:07.784797 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:47:07.784843 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-09 03:47:07.784858 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-09 03:47:07.784873 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-09 03:47:07.784888 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:47:07.784904 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-09 03:47:07.784919 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-09 03:47:07.784935 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-09 03:47:07.784949 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:47:07.784965 | orchestrator | 2026-02-09 03:47:07.784980 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-09 03:47:07.784996 | orchestrator | Monday 09 February 2026 03:47:01 +0000 (0:00:00.404) 0:00:24.126 ******* 2026-02-09 
03:47:07.785012 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 03:47:07.785028 | orchestrator |
2026-02-09 03:47:07.785045 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-09 03:47:07.785062 | orchestrator | Monday 09 February 2026 03:47:02 +0000 (0:00:00.861) 0:00:24.987 *******
2026-02-09 03:47:07.785078 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:47:07.785092 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:47:07.785108 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:47:07.785124 | orchestrator |
2026-02-09 03:47:07.785141 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-09 03:47:07.785157 | orchestrator | Monday 09 February 2026 03:47:03 +0000 (0:00:00.390) 0:00:25.378 *******
2026-02-09 03:47:07.785172 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:47:07.785187 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:47:07.785202 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:47:07.785217 | orchestrator |
2026-02-09 03:47:07.785232 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-09 03:47:07.785248 | orchestrator | Monday 09 February 2026 03:47:03 +0000 (0:00:00.324) 0:00:25.702 *******
2026-02-09 03:47:07.785264 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:47:07.785280 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:47:07.785295 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:47:07.785311 | orchestrator |
2026-02-09 03:47:07.785326 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-09 03:47:07.785342 | orchestrator | Monday 09 February 2026 03:47:04 +0000 (0:00:00.577) 0:00:26.280 *******
2026-02-09 03:47:07.785357 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:47:07.785372 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:47:07.785386 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:47:07.785401 | orchestrator |
2026-02-09 03:47:07.785416 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-09 03:47:07.785432 | orchestrator | Monday 09 February 2026 03:47:04 +0000 (0:00:00.443) 0:00:26.723 *******
2026-02-09 03:47:07.785448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:47:07.785462 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-09 03:47:07.785478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 03:47:07.785511 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:47:07.785527 | orchestrator |
2026-02-09 03:47:07.785553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-09 03:47:07.785569 | orchestrator | Monday 09 February 2026 03:47:04 +0000 (0:00:00.406) 0:00:27.130 *******
2026-02-09 03:47:07.785583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:47:07.785598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-09 03:47:07.785613 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 03:47:07.785629 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:47:07.785644 | orchestrator |
2026-02-09 03:47:07.785658 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-09 03:47:07.785673 | orchestrator | Monday 09 February 2026 03:47:05 +0000 (0:00:00.395) 0:00:27.525 *******
2026-02-09 03:47:07.785689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:47:07.785705 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-09 03:47:07.785721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-09 03:47:07.785737 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:47:07.785752 | orchestrator |
2026-02-09 03:47:07.785768 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-09 03:47:07.785784 | orchestrator | Monday 09 February 2026 03:47:05 +0000 (0:00:00.404) 0:00:27.929 *******
2026-02-09 03:47:07.785825 | orchestrator | ok: [testbed-node-3]
2026-02-09 03:47:07.785843 | orchestrator | ok: [testbed-node-4]
2026-02-09 03:47:07.785858 | orchestrator | ok: [testbed-node-5]
2026-02-09 03:47:07.785873 | orchestrator |
2026-02-09 03:47:07.785889 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-09 03:47:07.785903 | orchestrator | Monday 09 February 2026 03:47:06 +0000 (0:00:00.370) 0:00:28.300 *******
2026-02-09 03:47:07.785918 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-09 03:47:07.785933 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-09 03:47:07.785948 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-09 03:47:07.785962 | orchestrator |
2026-02-09 03:47:07.785976 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-09 03:47:07.785991 | orchestrator | Monday 09 February 2026 03:47:06 +0000 (0:00:00.830) 0:00:29.131 *******
2026-02-09 03:47:07.786006 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-09 03:47:07.786144 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 03:48:44.103946 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 03:48:44.104052 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:48:44.104064 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] =>
(item=testbed-node-4)
2026-02-09 03:48:44.104074 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-09 03:48:44.104084 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-09 03:48:44.104092 | orchestrator |
2026-02-09 03:48:44.104101 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-09 03:48:44.104111 | orchestrator | Monday 09 February 2026 03:47:07 +0000 (0:00:00.861) 0:00:29.992 *******
2026-02-09 03:48:44.104120 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-09 03:48:44.104128 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 03:48:44.104137 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 03:48:44.104147 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-09 03:48:44.104155 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-09 03:48:44.104164 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-09 03:48:44.104196 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-09 03:48:44.104205 | orchestrator |
2026-02-09 03:48:44.104213 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-02-09 03:48:44.104222 | orchestrator | Monday 09 February 2026 03:47:09 +0000 (0:00:01.807) 0:00:31.800 *******
2026-02-09 03:48:44.104230 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:48:44.104240 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:48:44.104248 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-02-09 03:48:44.104256 | orchestrator |
2026-02-09 03:48:44.104264 | orchestrator | TASK [create openstack pool(s)] ************************************************
2026-02-09 03:48:44.104272 | orchestrator | Monday 09 February 2026 03:47:09 +0000 (0:00:00.415) 0:00:32.216 *******
2026-02-09 03:48:44.104294 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-09 03:48:44.104305 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-09 03:48:44.104327 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-09 03:48:44.104336 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-09 03:48:44.104344 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-09 03:48:44.104352 | orchestrator |
2026-02-09 03:48:44.104360 | orchestrator | TASK [generate keys] ***********************************************************
2026-02-09 03:48:44.104368 | orchestrator | Monday 09 February 2026 03:47:53 +0000 (0:00:43.524) 0:01:15.741 *******
2026-02-09 03:48:44.104376 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104383 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104391 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104400 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104408 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104416 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104424 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-02-09 03:48:44.104433 | orchestrator |
2026-02-09 03:48:44.104441 | orchestrator | TASK [get keys from monitors] **************************************************
2026-02-09 03:48:44.104448 | orchestrator | Monday 09 February 2026 03:48:16 +0000 (0:00:22.547) 0:01:38.288 *******
2026-02-09 03:48:44.104479 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104489 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104511 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104520 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104528 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104536 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104544 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-09 03:48:44.104552 | orchestrator |
2026-02-09 03:48:44.104560 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-02-09 03:48:44.104569 | orchestrator | Monday 09 February 2026 03:48:26 +0000 (0:00:10.764) 0:01:49.053 *******
2026-02-09 03:48:44.104577 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104586 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-09 03:48:44.104594 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-09 03:48:44.104602 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104610 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-09 03:48:44.104618 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-09 03:48:44.104627 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104636 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-09 03:48:44.104644 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-09 03:48:44.104652 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104661 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-09 03:48:44.104668 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-09 03:48:44.104674 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104680 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-09 03:48:44.104686 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-09 03:48:44.104692 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-09 03:48:44.104698 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-09 03:48:44.104703 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-09 03:48:44.104710 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-02-09 03:48:44.104716 | orchestrator |
2026-02-09 03:48:44.104721 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:48:44.104733 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-09 03:48:44.104742 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-09 03:48:44.104749 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-09 03:48:44.104755 | orchestrator |
2026-02-09 03:48:44.104761 | orchestrator |
2026-02-09 03:48:44.104767 | orchestrator |
2026-02-09 03:48:44.104772 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:48:44.104778 | orchestrator | Monday 09 February 2026 03:48:44 +0000 (0:00:17.229) 0:02:06.283 *******
2026-02-09 03:48:44.104784 | orchestrator | ===============================================================================
2026-02-09 03:48:44.104790 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.52s
2026-02-09 03:48:44.104823 | orchestrator | generate keys ---------------------------------------------------------- 22.55s
2026-02-09 03:48:44.104832 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.23s
2026-02-09 03:48:44.104839 | orchestrator | get keys from monitors ------------------------------------------------- 10.76s
2026-02-09 03:48:44.104847 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.24s
2026-02-09 03:48:44.104855 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.81s
2026-02-09 03:48:44.104863 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.66s
2026-02-09 03:48:44.104871 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.19s
2026-02-09 03:48:44.104880 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.13s
2026-02-09 03:48:44.104899 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 1.01s
2026-02-09 03:48:44.104916 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.89s
2026-02-09 03:48:44.104925 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.86s
2026-02-09 03:48:44.104933 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.86s
2026-02-09 03:48:44.104951 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.85s
2026-02-09 03:48:44.508713 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.83s
2026-02-09 03:48:44.508839 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.81s
2026-02-09 03:48:44.508854 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s
2026-02-09 03:48:44.508862 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.69s
2026-02-09 03:48:44.508870 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s
2026-02-09 03:48:44.508879 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.68s
2026-02-09 03:48:47.005463 | orchestrator | 2026-02-09 03:48:47 | INFO  | Task d7b1bf01-b198-435c-8a77-2ac88c640002 (copy-ceph-keys) was prepared for execution.
2026-02-09 03:48:47.005557 | orchestrator | 2026-02-09 03:48:47 | INFO  | It takes a moment until task d7b1bf01-b198-435c-8a77-2ac88c640002 (copy-ceph-keys) has been started and output is visible here.
2026-02-09 03:49:26.103332 | orchestrator |
2026-02-09 03:49:26.103438 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-02-09 03:49:26.103449 | orchestrator |
2026-02-09 03:49:26.103457 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-02-09 03:49:26.103463 | orchestrator | Monday 09 February 2026 03:48:51 +0000 (0:00:00.165) 0:00:00.165 *******
2026-02-09 03:49:26.103469 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-09 03:49:26.103477 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103483 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103489 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-09 03:49:26.103495 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103501 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-09 03:49:26.103507 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-09 03:49:26.103514 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] =>
(item=ceph.client.gnocchi.keyring)
2026-02-09 03:49:26.103521 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-09 03:49:26.103548 | orchestrator |
2026-02-09 03:49:26.103555 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-02-09 03:49:26.103560 | orchestrator | Monday 09 February 2026 03:48:55 +0000 (0:00:04.493) 0:00:04.659 *******
2026-02-09 03:49:26.103566 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-09 03:49:26.103572 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103591 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103597 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-09 03:49:26.103604 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103610 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-09 03:49:26.103617 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-09 03:49:26.103622 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-09 03:49:26.103628 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-09 03:49:26.103633 | orchestrator |
2026-02-09 03:49:26.103639 | orchestrator | TASK [Create share directory] **************************************************
2026-02-09 03:49:26.103644 | orchestrator | Monday 09 February 2026 03:49:00 +0000 (0:00:04.186) 0:00:08.845 *******
2026-02-09 03:49:26.103651 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-09 03:49:26.103657 | orchestrator |
2026-02-09 03:49:26.103662 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-02-09 03:49:26.103668 | orchestrator | Monday 09 February 2026 03:49:01 +0000 (0:00:00.983) 0:00:09.829 *******
2026-02-09 03:49:26.103673 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-02-09 03:49:26.103680 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103686 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103694 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-02-09 03:49:26.103700 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103706 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-02-09 03:49:26.103711 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-02-09 03:49:26.103717 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-02-09 03:49:26.103723 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-02-09 03:49:26.103728 | orchestrator |
2026-02-09 03:49:26.103734 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-02-09 03:49:26.103739 | orchestrator | Monday 09 February 2026 03:49:15 +0000 (0:00:14.255) 0:00:24.084 *******
2026-02-09 03:49:26.103744 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-02-09 03:49:26.103750 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-02-09 03:49:26.103756 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-09 03:49:26.103762 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-09 03:49:26.103785 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-09 03:49:26.103792 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-09 03:49:26.103829 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-02-09 03:49:26.103836 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-02-09 03:49:26.103842 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-02-09 03:49:26.103848 | orchestrator |
2026-02-09 03:49:26.103854 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-02-09 03:49:26.103860 | orchestrator | Monday 09 February 2026 03:49:18 +0000 (0:00:03.193) 0:00:27.278 *******
2026-02-09 03:49:26.103867 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-02-09 03:49:26.103874 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103881 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103888 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-02-09 03:49:26.103894 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-09 03:49:26.103900 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-02-09 03:49:26.103906 | orchestrator | changed: [testbed-manager] =>
(item=ceph.client.glance.keyring)
2026-02-09 03:49:26.103912 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-02-09 03:49:26.103918 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-02-09 03:49:26.103924 | orchestrator |
2026-02-09 03:49:26.103930 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:49:26.103937 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 03:49:26.103944 | orchestrator |
2026-02-09 03:49:26.103950 | orchestrator |
2026-02-09 03:49:26.103963 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:49:26.103970 | orchestrator | Monday 09 February 2026 03:49:25 +0000 (0:00:07.289) 0:00:34.568 *******
2026-02-09 03:49:26.103977 | orchestrator | ===============================================================================
2026-02-09 03:49:26.103983 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.26s
2026-02-09 03:49:26.103989 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.29s
2026-02-09 03:49:26.103995 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.49s
2026-02-09 03:49:26.104001 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.19s
2026-02-09 03:49:26.104007 | orchestrator | Check if target directories exist --------------------------------------- 3.19s
2026-02-09 03:49:26.104013 | orchestrator | Create share directory -------------------------------------------------- 0.98s
2026-02-09 03:49:38.923003 | orchestrator | 2026-02-09 03:49:38 | INFO  | Task d6537ec4-536d-4ac4-a01e-22b1fe441a3e (cephclient) was prepared for execution.
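The copy-ceph-keys play logged above pulls the freshly generated keyrings off the first monitor and writes them into the configuration repository. A minimal sketch of that flow as an Ansible play, assuming the standard `ceph auth get` CLI; the key names and target path come from the log, while the module arguments and delegation target are assumptions, not the actual OSISM implementation:

```yaml
# Hedged sketch of the "Copy ceph keys to the configuration repository" flow.
# Key names and the destination path mirror the log; everything else is assumed.
- name: Copy ceph keys to the configuration repository
  hosts: testbed-manager
  vars:
    ceph_keys:
      - admin
      - cinder
      - cinder-backup
      - nova
      - glance
      - gnocchi
      - manila
  tasks:
    - name: Fetch all ceph keys
      ansible.builtin.command: "ceph auth get client.{{ item }}"
      delegate_to: testbed-node-0
      loop: "{{ ceph_keys }}"
      register: fetched_keys
      changed_when: false

    - name: Write ceph keys to the configuration directory
      ansible.builtin.copy:
        content: "{{ item.stdout }}\n"
        dest: "/opt/configuration/environments/infrastructure/files/ceph/ceph.client.{{ item.item }}.keyring"
        mode: "0600"
      loop: "{{ fetched_keys.results }}"
```

`ceph auth get` prints the keyring in the usual ini-style keyring format, which is why the fetched stdout can be written out verbatim.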
2026-02-09 03:49:38.923123 | orchestrator | 2026-02-09 03:49:38 | INFO  | It takes a moment until task d6537ec4-536d-4ac4-a01e-22b1fe441a3e (cephclient) has been started and output is visible here.
2026-02-09 03:50:41.064274 | orchestrator |
2026-02-09 03:50:41.064360 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-09 03:50:41.064368 | orchestrator |
2026-02-09 03:50:41.064374 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-09 03:50:41.064379 | orchestrator | Monday 09 February 2026 03:49:43 +0000 (0:00:00.271) 0:00:00.272 *******
2026-02-09 03:50:41.064384 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-09 03:50:41.064391 | orchestrator |
2026-02-09 03:50:41.064396 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-09 03:50:41.064442 | orchestrator | Monday 09 February 2026 03:49:43 +0000 (0:00:00.262) 0:00:00.534 *******
2026-02-09 03:50:41.064449 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-09 03:50:41.064454 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-09 03:50:41.064459 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-09 03:50:41.064464 | orchestrator |
2026-02-09 03:50:41.064469 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-09 03:50:41.064474 | orchestrator | Monday 09 February 2026 03:49:45 +0000 (0:00:01.317) 0:00:01.852 *******
2026-02-09 03:50:41.064479 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-09 03:50:41.064484 | orchestrator |
2026-02-09 03:50:41.064489 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-09 03:50:41.064494 | orchestrator | Monday 09 February 2026 03:49:46 +0000 (0:00:01.481) 0:00:03.334 *******
2026-02-09 03:50:41.064499 | orchestrator | changed: [testbed-manager]
2026-02-09 03:50:41.064503 | orchestrator |
2026-02-09 03:50:41.064508 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-09 03:50:41.064513 | orchestrator | Monday 09 February 2026 03:49:47 +0000 (0:00:00.953) 0:00:04.298 *******
2026-02-09 03:50:41.064518 | orchestrator | changed: [testbed-manager]
2026-02-09 03:50:41.064523 | orchestrator |
2026-02-09 03:50:41.064527 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-09 03:50:41.064532 | orchestrator | Monday 09 February 2026 03:49:48 +0000 (0:00:00.953) 0:00:05.251 *******
2026-02-09 03:50:41.064537 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-09 03:50:41.064542 | orchestrator | ok: [testbed-manager]
2026-02-09 03:50:41.064547 | orchestrator |
2026-02-09 03:50:41.064551 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-09 03:50:41.064556 | orchestrator | Monday 09 February 2026 03:50:29 +0000 (0:00:41.467) 0:00:46.718 *******
2026-02-09 03:50:41.064561 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-09 03:50:41.064566 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-09 03:50:41.064570 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-09 03:50:41.064575 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-09 03:50:41.064580 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-09 03:50:41.064598 | orchestrator |
2026-02-09 03:50:41.064610 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-09 03:50:41.064615 | orchestrator | Monday 09 February 2026 03:50:34 +0000 (0:00:04.618) 0:00:51.337 *******
2026-02-09 03:50:41.064620 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-09 03:50:41.064625 | orchestrator |
2026-02-09 03:50:41.064630 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-09 03:50:41.064634 | orchestrator | Monday 09 February 2026 03:50:35 +0000 (0:00:00.510) 0:00:51.847 *******
2026-02-09 03:50:41.064639 | orchestrator | skipping: [testbed-manager]
2026-02-09 03:50:41.064644 | orchestrator |
2026-02-09 03:50:41.064649 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-09 03:50:41.064654 | orchestrator | Monday 09 February 2026 03:50:35 +0000 (0:00:00.144) 0:00:51.992 *******
2026-02-09 03:50:41.064658 | orchestrator | skipping: [testbed-manager]
2026-02-09 03:50:41.064663 | orchestrator |
2026-02-09 03:50:41.064668 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-09 03:50:41.064672 | orchestrator | Monday 09 February 2026 03:50:35 +0000 (0:00:00.608) 0:00:52.600 *******
2026-02-09 03:50:41.064677 | orchestrator | changed: [testbed-manager]
2026-02-09 03:50:41.064682 | orchestrator |
2026-02-09 03:50:41.064687 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-09 03:50:41.064702 | orchestrator | Monday 09 February 2026 03:50:37 +0000 (0:00:01.703) 0:00:54.304 *******
2026-02-09 03:50:41.064707 | orchestrator | changed: [testbed-manager]
2026-02-09 03:50:41.064719 | orchestrator |
2026-02-09 03:50:41.064724 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-09 03:50:41.064729 | orchestrator | Monday 09 February 2026 03:50:38 +0000 (0:00:00.840) 0:00:55.144 *******
2026-02-09 03:50:41.064734 | orchestrator | changed: [testbed-manager]
2026-02-09 03:50:41.064738 | orchestrator |
2026-02-09 03:50:41.064743 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-09 03:50:41.064748 | orchestrator | Monday 09 February 2026 03:50:39 +0000 (0:00:00.765) 0:00:55.910 *******
2026-02-09 03:50:41.064753 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-09 03:50:41.064758 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-09 03:50:41.064763 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-09 03:50:41.064768 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-09 03:50:41.064773 | orchestrator |
2026-02-09 03:50:41.064778 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:50:41.064783 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 03:50:41.064789 | orchestrator |
2026-02-09 03:50:41.064794 | orchestrator |
2026-02-09 03:50:41.064833 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:50:41.064840 | orchestrator | Monday 09 February 2026 03:50:40 +0000 (0:00:01.573) 0:00:57.483 *******
2026-02-09 03:50:41.064846 | orchestrator | ===============================================================================
2026-02-09 03:50:41.064851 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.47s
2026-02-09 03:50:41.064856 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.62s
2026-02-09 03:50:41.064862 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.70s
2026-02-09 03:50:41.064867 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.57s
2026-02-09 03:50:41.064873 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.48s
2026-02-09 03:50:41.064878 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.32s
2026-02-09 03:50:41.064884 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.96s
2026-02-09 03:50:41.064889 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.95s
2026-02-09 03:50:41.064895 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.84s
2026-02-09 03:50:41.064900 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.77s
2026-02-09 03:50:41.064906 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.61s
2026-02-09 03:50:41.064912 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.51s
2026-02-09 03:50:41.064917 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.26s
2026-02-09 03:50:41.064923 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-02-09 03:50:43.653861 | orchestrator | 2026-02-09 03:50:43 | INFO  | Task 85034a64-1a34-446b-8e06-2c735ca55fc5 (ceph-bootstrap-dashboard) was prepared for execution.
2026-02-09 03:50:43.654095 | orchestrator | 2026-02-09 03:50:43 | INFO  | It takes a moment until task 85034a64-1a34-446b-8e06-2c735ca55fc5 (ceph-bootstrap-dashboard) has been started and output is visible here.
2026-02-09 03:52:03.876466 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-09 03:52:03.876564 | orchestrator | 2.16.14
2026-02-09 03:52:03.876579 | orchestrator |
2026-02-09 03:52:03.876591 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-02-09 03:52:03.876601 | orchestrator |
2026-02-09 03:52:03.876610 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-02-09 03:52:03.876619 | orchestrator | Monday 09 February 2026 03:50:48 +0000 (0:00:00.289) 0:00:00.289 *******
2026-02-09 03:52:03.876655 | orchestrator | changed: [testbed-manager]
2026-02-09 03:52:03.876666 | orchestrator |
2026-02-09 03:52:03.876674 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-02-09 03:52:03.876684 | orchestrator | Monday 09 February 2026 03:50:50 +0000 (0:00:02.273) 0:00:02.562 *******
2026-02-09 03:52:03.876692 | orchestrator | changed: [testbed-manager]
2026-02-09 03:52:03.876701 | orchestrator |
2026-02-09 03:52:03.876710 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-02-09 03:52:03.876718 | orchestrator | Monday 09 February 2026 03:50:51 +0000 (0:00:01.103) 0:00:03.666 *******
2026-02-09 03:52:03.876727 | orchestrator | changed: [testbed-manager]
2026-02-09 03:52:03.876735 | orchestrator |
2026-02-09 03:52:03.876744 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-02-09 03:52:03.876753 | orchestrator | Monday 09 February 2026 03:50:52 +0000 (0:00:01.150) 0:00:04.816 *******
2026-02-09 03:52:03.876762 | orchestrator | changed: [testbed-manager]
2026-02-09 03:52:03.876770 | orchestrator |
2026-02-09 03:52:03.876779 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-02-09 03:52:03.876787 | orchestrator | Monday 09 February 2026 03:50:54 +0000 (0:00:01.280) 0:00:06.096 *******
2026-02-09 03:52:03.876796 | orchestrator | changed: [testbed-manager]
2026-02-09 03:52:03.876852 | orchestrator |
2026-02-09 03:52:03.876864 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-02-09 03:52:03.876872 | orchestrator | Monday 09 February 2026 03:50:55 +0000 (0:00:01.059) 0:00:07.155 *******
2026-02-09 03:52:03.876881 | orchestrator | changed: [testbed-manager]
2026-02-09 03:52:03.876896 | orchestrator |
2026-02-09 03:52:03.876911 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-02-09 03:52:03.876944 | orchestrator | Monday 09 February 2026 03:50:56 +0000 (0:00:01.197) 0:00:08.353 *******
2026-02-09 03:52:03.876959 | orchestrator | changed: [testbed-manager]
2026-02-09 03:52:03.876968 | orchestrator |
2026-02-09 03:52:03.876978 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-09 03:52:03.876989 | orchestrator | Monday 09 February 2026 03:50:58 +0000 (0:00:02.050) 0:00:10.403 *******
2026-02-09 03:52:03.877000 | orchestrator | changed: [testbed-manager]
2026-02-09 03:52:03.877010 | orchestrator |
2026-02-09 03:52:03.877021 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-09 03:52:03.877031 | orchestrator | Monday 09 February 2026 03:50:59 +0000 (0:00:01.234) 0:00:11.638 *******
2026-02-09 03:52:03.877042 | orchestrator | changed: [testbed-manager]
2026-02-09 03:52:03.877052 | orchestrator |
2026-02-09 03:52:03.877062 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-09 03:52:03.877073 | orchestrator | Monday 09 February 2026 03:51:38 +0000 (0:00:39.411) 0:00:51.049 *******
2026-02-09 03:52:03.877083 | orchestrator | skipping: [testbed-manager]
2026-02-09 03:52:03.877093 | orchestrator |
2026-02-09 03:52:03.877104 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-09 03:52:03.877115 | orchestrator |
2026-02-09 03:52:03.877125 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-09 03:52:03.877135 | orchestrator | Monday 09 February 2026 03:51:39 +0000 (0:00:00.183) 0:00:51.233 *******
2026-02-09 03:52:03.877145 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:52:03.877156 | orchestrator |
2026-02-09 03:52:03.877166 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-09 03:52:03.877177 | orchestrator |
2026-02-09 03:52:03.877187 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-09 03:52:03.877197 | orchestrator | Monday 09 February 2026 03:51:50 +0000 (0:00:11.669) 0:01:02.903 *******
2026-02-09 03:52:03.877208 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:52:03.877218 | orchestrator |
2026-02-09 03:52:03.877228 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-09 03:52:03.877239 | orchestrator |
2026-02-09 03:52:03.877253 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-09 03:52:03.877280 | orchestrator | Monday 09 February 2026 03:52:02 +0000 (0:00:11.198) 0:01:14.101 *******
2026-02-09 03:52:03.877297 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:52:03.877313 | orchestrator |
2026-02-09 03:52:03.877324 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:52:03.877336 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-09 03:52:03.877348 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 03:52:03.877360 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 03:52:03.877370 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 03:52:03.877379 | orchestrator |
2026-02-09 03:52:03.877389 | orchestrator |
2026-02-09 03:52:03.877404 | orchestrator |
2026-02-09 03:52:03.877419 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:52:03.877435 | orchestrator | Monday 09 February 2026 03:52:03 +0000 (0:00:01.382) 0:01:15.483 *******
2026-02-09 03:52:03.877450 | orchestrator | ===============================================================================
2026-02-09 03:52:03.877460 | orchestrator | Create admin user ------------------------------------------------------ 39.41s
2026-02-09 03:52:03.877489 | orchestrator | Restart ceph manager service ------------------------------------------- 24.25s
2026-02-09 03:52:03.877504 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.27s
2026-02-09 03:52:03.877519 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.05s
2026-02-09 03:52:03.877534 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.28s
2026-02-09 03:52:03.877549 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.23s
2026-02-09 03:52:03.877563 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.20s
2026-02-09 03:52:03.877578 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.15s
2026-02-09 03:52:03.877594 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.10s
2026-02-09 03:52:03.877608 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s
2026-02-09 03:52:03.877624 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s
2026-02-09 03:52:04.234001 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-02-09 03:52:06.373426 | orchestrator | 2026-02-09 03:52:06 | INFO  | Task dfb54f6f-df78-4168-a8d9-f4e5dbe09de8 (keystone) was prepared for execution.
2026-02-09 03:52:06.373790 | orchestrator | 2026-02-09 03:52:06 | INFO  | It takes a moment until task dfb54f6f-df78-4168-a8d9-f4e5dbe09de8 (keystone) has been started and output is visible here.
2026-02-09 03:52:13.569776 | orchestrator |
2026-02-09 03:52:13.569921 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 03:52:13.569932 | orchestrator |
2026-02-09 03:52:13.569939 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 03:52:13.569946 | orchestrator | Monday 09 February 2026 03:52:10 +0000 (0:00:00.277) 0:00:00.277 *******
2026-02-09 03:52:13.569952 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:52:13.569960 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:52:13.569982 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:52:13.569989 | orchestrator |
2026-02-09 03:52:13.569996 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 03:52:13.570003 | orchestrator | Monday 09 February 2026 03:52:11 +0000 (0:00:00.338) 0:00:00.616 *******
2026-02-09 03:52:13.570010 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-09 03:52:13.570077 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-09 03:52:13.570085 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-09 03:52:13.570092 | orchestrator |
2026-02-09 03:52:13.570099 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-09 03:52:13.570105 | orchestrator |
2026-02-09 03:52:13.570112 | orchestrator | TASK
[keystone : include_tasks] ************************************************ 2026-02-09 03:52:13.570119 | orchestrator | Monday 09 February 2026 03:52:11 +0000 (0:00:00.470) 0:00:01.086 ******* 2026-02-09 03:52:13.570126 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:52:13.570134 | orchestrator | 2026-02-09 03:52:13.570140 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-09 03:52:13.570147 | orchestrator | Monday 09 February 2026 03:52:12 +0000 (0:00:00.615) 0:00:01.702 ******* 2026-02-09 03:52:13.570158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:13.570168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:13.570190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:13.570210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-09 03:52:13.570218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-09 03:52:13.570225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-09 03:52:13.570232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:13.570239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:13.570246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:13.570253 | orchestrator | 2026-02-09 03:52:13.570260 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-02-09 03:52:13.570278 | orchestrator | Monday 09 February 2026 03:52:13 +0000 (0:00:01.436) 0:00:03.139 ******* 2026-02-09 03:52:19.320749 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:52:19.320945 | orchestrator | 2026-02-09 03:52:19.320971 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-09 03:52:19.320987 | orchestrator | Monday 09 February 2026 03:52:13 +0000 (0:00:00.311) 0:00:03.451 ******* 2026-02-09 03:52:19.321000 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:52:19.321016 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:52:19.321103 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:52:19.321120 | orchestrator | 2026-02-09 03:52:19.321132 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-09 03:52:19.321143 | orchestrator | Monday 09 February 2026 03:52:14 +0000 (0:00:00.335) 0:00:03.786 ******* 2026-02-09 03:52:19.321154 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 03:52:19.321165 | orchestrator | 2026-02-09 03:52:19.321177 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-09 03:52:19.321189 | orchestrator | Monday 09 February 2026 03:52:15 +0000 (0:00:00.859) 0:00:04.646 ******* 2026-02-09 03:52:19.321201 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:52:19.321214 | orchestrator | 2026-02-09 03:52:19.321225 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-09 03:52:19.321236 | orchestrator | Monday 09 February 2026 03:52:15 +0000 (0:00:00.573) 0:00:05.219 ******* 2026-02-09 03:52:19.321254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:19.321272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:19.321286 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:19.321351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-09 03:52:19.321369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-09 03:52:19.321383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-09 03:52:19.321395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:19.321406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:19.321417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:19.321437 | orchestrator | 2026-02-09 03:52:19.321449 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-09 03:52:19.321460 | orchestrator | Monday 09 February 2026 03:52:18 +0000 (0:00:03.063) 0:00:08.283 ******* 2026-02-09 03:52:19.321481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-09 03:52:20.145239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 03:52:20.145403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 03:52:20.145429 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:52:20.145450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-09 03:52:20.145496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 03:52:20.145520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 03:52:20.145535 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:52:20.145573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-09 03:52:20.145591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-09 03:52:20.145606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 03:52:20.145622 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:52:20.145637 | orchestrator | 2026-02-09 03:52:20.145653 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-09 03:52:20.145679 | orchestrator | Monday 09 February 2026 03:52:19 +0000 (0:00:00.618) 0:00:08.901 ******* 2026-02-09 03:52:20.145696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-09 03:52:20.145719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 03:52:20.145744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 03:52:23.382355 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:52:23.382453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-09 03:52:23.382467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 03:52:23.382523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 03:52:23.382532 | 
orchestrator | skipping: [testbed-node-1] 2026-02-09 03:52:23.382554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-09 03:52:23.382562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 03:52:23.382585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 03:52:23.382593 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:52:23.382601 | orchestrator | 2026-02-09 03:52:23.382609 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-09 03:52:23.382619 | orchestrator | Monday 09 February 2026 03:52:20 +0000 (0:00:00.820) 0:00:09.722 ******* 2026-02-09 03:52:23.382627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:23.382641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:23.382654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:23.382669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-09 03:52:28.499939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-09 03:52:28.500125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-09 03:52:28.500224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:28.500237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:28.500270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 
03:52:28.500284 | orchestrator | 2026-02-09 03:52:28.500298 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-09 03:52:28.500311 | orchestrator | Monday 09 February 2026 03:52:23 +0000 (0:00:03.230) 0:00:12.953 ******* 2026-02-09 03:52:28.500346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:28.500361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-09 03:52:28.500385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:28.500398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 03:52:28.500416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-09 03:52:28.500433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 03:52:32.453491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:32.453657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:32.453676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-09 03:52:32.453689 | orchestrator | 2026-02-09 03:52:32.453702 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-09 03:52:32.453715 | orchestrator | Monday 09 February 2026 03:52:28 +0000 (0:00:05.118) 0:00:18.071 ******* 2026-02-09 03:52:32.453726 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:52:32.453738 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:52:32.453749 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:52:32.453760 | orchestrator | 
2026-02-09 03:52:32.453771 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-02-09 03:52:32.453782 | orchestrator | Monday 09 February 2026 03:52:29 +0000 (0:00:01.481) 0:00:19.552 *******
2026-02-09 03:52:32.453793 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:52:32.453803 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:52:32.453874 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:52:32.453893 | orchestrator |
2026-02-09 03:52:32.453911 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-02-09 03:52:32.453929 | orchestrator | Monday 09 February 2026 03:52:30 +0000 (0:00:00.830) 0:00:20.382 *******
2026-02-09 03:52:32.453946 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:52:32.453963 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:52:32.453981 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:52:32.454000 | orchestrator |
2026-02-09 03:52:32.454091 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-02-09 03:52:32.454107 | orchestrator | Monday 09 February 2026 03:52:31 +0000 (0:00:00.624) 0:00:21.007 *******
2026-02-09 03:52:32.454135 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:52:32.454148 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:52:32.454161 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:52:32.454174 | orchestrator |
2026-02-09 03:52:32.454184 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-02-09 03:52:32.454196 | orchestrator | Monday 09 February 2026 03:52:31 +0000 (0:00:00.322) 0:00:21.330 *******
2026-02-09 03:52:32.454233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:52:32.454268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:52:32.454288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:52:32.454308 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:52:32.454327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:52:32.454355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:52:32.454376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:52:32.454407 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:52:32.454449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:52:51.553450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:52:51.553574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:52:51.553590 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:52:51.553603 | orchestrator |
2026-02-09 03:52:51.553614 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-09 03:52:51.553625 | orchestrator | Monday 09 February 2026 03:52:32 +0000 (0:00:00.699) 0:00:22.030 *******
2026-02-09 03:52:51.553635 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:52:51.553644 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:52:51.553654 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:52:51.553664 | orchestrator |
2026-02-09 03:52:51.553674 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-02-09 03:52:51.553683 | orchestrator | Monday 09 February 2026 03:52:32 +0000 (0:00:00.324) 0:00:22.354 *******
2026-02-09 03:52:51.553693 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-09 03:52:51.553704 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-09 03:52:51.553718 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-09 03:52:51.553734 | orchestrator |
2026-02-09 03:52:51.553778 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-02-09 03:52:51.553791 | orchestrator | Monday 09 February 2026 03:52:34 +0000 (0:00:01.977) 0:00:24.331 *******
2026-02-09 03:52:51.553865 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 03:52:51.553878 | orchestrator |
2026-02-09 03:52:51.553887 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-02-09 03:52:51.553897 | orchestrator | Monday 09 February 2026 03:52:35 +0000 (0:00:01.045) 0:00:25.376 *******
2026-02-09 03:52:51.553906 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:52:51.553916 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:52:51.553925 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:52:51.553935 | orchestrator |
2026-02-09 03:52:51.553951 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-02-09 03:52:51.553967 | orchestrator | Monday 09 February 2026 03:52:36 +0000 (0:00:00.612) 0:00:25.989 *******
2026-02-09 03:52:51.553987 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 03:52:51.554006 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-09 03:52:51.554089 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-09 03:52:51.554102 | orchestrator |
2026-02-09 03:52:51.554114 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-02-09 03:52:51.554126 | orchestrator | Monday 09 February 2026 03:52:37 +0000 (0:00:01.060) 0:00:27.050 *******
2026-02-09 03:52:51.554137 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:52:51.554150 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:52:51.554161 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:52:51.554175 | orchestrator |
2026-02-09 03:52:51.554192 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-02-09 03:52:51.554209 | orchestrator | Monday 09 February 2026 03:52:38 +0000 (0:00:00.564) 0:00:27.614 *******
2026-02-09 03:52:51.554228 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-09 03:52:51.554246 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-09 03:52:51.554261 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-09 03:52:51.554274 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-09 03:52:51.554285 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-09 03:52:51.554296 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-09 03:52:51.554308 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-09 03:52:51.554319 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-09 03:52:51.554348 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-09 03:52:51.554359 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-09 03:52:51.554368 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-09 03:52:51.554377 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-09 03:52:51.554387 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-09 03:52:51.554397 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-09 03:52:51.554406 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-09 03:52:51.554417 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-09 03:52:51.554433 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-09 03:52:51.554462 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-09 03:52:51.554480 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-09 03:52:51.554498 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-09 03:52:51.554515 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-09 03:52:51.554526 | orchestrator |
2026-02-09 03:52:51.554535 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-02-09 03:52:51.554544 | orchestrator | Monday 09 February 2026 03:52:46 +0000 (0:00:08.701) 0:00:36.316 *******
2026-02-09 03:52:51.554554 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-09 03:52:51.554563 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-09 03:52:51.554572 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-09 03:52:51.554582 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-09 03:52:51.554591 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-09 03:52:51.554600 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-09 03:52:51.554610 | orchestrator |
2026-02-09 03:52:51.554619 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-02-09 03:52:51.554628 | orchestrator | Monday 09 February 2026 03:52:49 +0000 (0:00:02.622) 0:00:38.938 *******
2026-02-09 03:52:51.554647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:52:51.554669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:54:29.903146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-09 03:54:29.903279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:54:29.903313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:54:29.903326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-09 03:54:29.903338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:54:29.903367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:54:29.903387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-09 03:54:29.903399 | orchestrator |
2026-02-09 03:54:29.903413 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-09 03:54:29.903426 | orchestrator | Monday 09 February 2026 03:52:51 +0000 (0:00:02.187) 0:00:41.125 *******
2026-02-09 03:54:29.903437 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:54:29.903449 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:54:29.903459 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:54:29.903470 | orchestrator |
2026-02-09 03:54:29.903481 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-09 03:54:29.903492 | orchestrator | Monday 09 February 2026 03:52:52 +0000 (0:00:00.554) 0:00:41.679 *******
2026-02-09 03:54:29.903503 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:54:29.903514 | orchestrator |
2026-02-09 03:54:29.903525 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-09 03:54:29.903536 | orchestrator | Monday 09 February 2026 03:52:54 +0000 (0:00:02.124) 0:00:43.804 *******
2026-02-09 03:54:29.903546 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:54:29.903557 | orchestrator |
2026-02-09 03:54:29.903568 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-09 03:54:29.903579 | orchestrator | Monday 09 February 2026 03:52:56 +0000 (0:00:02.127) 0:00:45.932 *******
2026-02-09 03:54:29.903589 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:54:29.903601 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:54:29.903611 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:54:29.903622 | orchestrator |
2026-02-09 03:54:29.903633 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-09 03:54:29.903644 | orchestrator | Monday 09 February 2026 03:52:57 +0000 (0:00:00.804) 0:00:46.736 *******
2026-02-09 03:54:29.903655 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:54:29.903665 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:54:29.903701 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:54:29.903725 | orchestrator |
2026-02-09 03:54:29.903739 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-09 03:54:29.903752 | orchestrator | Monday 09 February 2026 03:52:57 +0000 (0:00:00.338) 0:00:47.075 *******
2026-02-09 03:54:29.903765 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:54:29.903784 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:54:29.903797 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:54:29.903811 | orchestrator |
2026-02-09 03:54:29.903842 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-09 03:54:29.903855 | orchestrator | Monday 09 February 2026 03:52:58 +0000 (0:00:00.562) 0:00:47.638 *******
2026-02-09 03:54:29.903868 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:54:29.903881 | orchestrator |
2026-02-09 03:54:29.903893 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-09 03:54:29.903906 | orchestrator | Monday 09 February 2026 03:53:10 +0000 (0:00:12.907) 0:01:00.545 *******
2026-02-09 03:54:29.903919 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:54:29.903932 | orchestrator |
2026-02-09 03:54:29.903944 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-09 03:54:29.903956 | orchestrator | Monday 09 February 2026 03:53:20 +0000 (0:00:00.070) 0:01:10.116 *******
2026-02-09 03:54:29.903969 | orchestrator |
2026-02-09 03:54:29.903983 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-09 03:54:29.904004 | orchestrator | Monday 09 February 2026 03:53:20 +0000 (0:00:00.070) 0:01:10.186 *******
2026-02-09 03:54:29.904016 | orchestrator |
2026-02-09 03:54:29.904029 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-09 03:54:29.904042 | orchestrator | Monday 09 February 2026 03:53:20 +0000 (0:00:00.070) 0:01:10.257 *******
2026-02-09 03:54:29.904055 | orchestrator |
2026-02-09 03:54:29.904068 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-09 03:54:29.904080 | orchestrator | Monday 09 February 2026 03:53:20 +0000 (0:00:00.078) 0:01:10.336 *******
2026-02-09 03:54:29.904091 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:54:29.904102 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:54:29.904113 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:54:29.904124 | orchestrator |
2026-02-09 03:54:29.904135 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-09 03:54:29.904146 | orchestrator | Monday 09 February 2026 03:54:12 +0000 (0:00:51.275) 0:02:01.611 *******
2026-02-09 03:54:29.904157 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:54:29.904168 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:54:29.904178 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:54:29.904189 | orchestrator |
2026-02-09 03:54:29.904200 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-09 03:54:29.904211 | orchestrator | Monday 09 February 2026 03:54:17 +0000 (0:00:05.241) 0:02:06.853 *******
2026-02-09 03:54:29.904222 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:54:29.904233 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:54:29.904244 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:54:29.904254 | orchestrator |
2026-02-09 03:54:29.904265 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-09 03:54:29.904276 | orchestrator | Monday 09 February 2026 03:54:29 +0000 (0:00:12.002) 0:02:18.855 *******
2026-02-09 03:54:29.904295 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 03:55:16.686353 | orchestrator |
2026-02-09 03:55:16.686458 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-09 03:55:16.686474 | orchestrator | Monday 09 February 2026 03:54:29 +0000 (0:00:00.625) 0:02:19.481 *******
2026-02-09 03:55:16.686481 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:55:16.686488 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:55:16.686495 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:55:16.686501 | orchestrator |
2026-02-09 03:55:16.686507 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-09 03:55:16.686514 | orchestrator | Monday 09 February 2026 03:54:31 +0000 (0:00:01.250) 0:02:20.731 *******
2026-02-09 03:55:16.686520 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:55:16.686526 | orchestrator |
2026-02-09 03:55:16.686532 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-09 03:55:16.686538 | orchestrator | Monday 09 February 2026 03:54:32 +0000 (0:00:01.749) 0:02:22.481 *******
2026-02-09 03:55:16.686544 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-09 03:55:16.686550 | orchestrator |
2026-02-09 03:55:16.686556 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-09 03:55:16.686562 | orchestrator | Monday 09 February 2026 03:54:42 +0000 (0:00:09.845) 0:02:32.326 *******
2026-02-09 03:55:16.686568 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-09 03:55:16.686574 | orchestrator |
2026-02-09 03:55:16.686579 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-09 03:55:16.686585 | orchestrator | Monday 09 February 2026 03:55:05 +0000 (0:00:22.755) 0:02:55.082 *******
2026-02-09 03:55:16.686591 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-09 03:55:16.686598 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-09 03:55:16.686604 | orchestrator |
2026-02-09 03:55:16.686627 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-09 03:55:16.686634 | orchestrator | Monday 09 February 2026 03:55:11 +0000 (0:00:06.190) 0:03:01.272 *******
2026-02-09 03:55:16.686640 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:55:16.686645 | orchestrator |
2026-02-09 03:55:16.686651 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-09 03:55:16.686657 | orchestrator | Monday 09 February 2026 03:55:11 +0000 (0:00:00.149) 0:03:01.421 *******
2026-02-09 03:55:16.686663 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:55:16.686668 | orchestrator |
2026-02-09 03:55:16.686674 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-09 03:55:16.686680 | orchestrator | Monday 09 February 2026 03:55:11 +0000 (0:00:00.145) 0:03:01.567 *******
2026-02-09 03:55:16.686685 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:55:16.686691 | orchestrator |
2026-02-09 03:55:16.686697 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-09 03:55:16.686702 | orchestrator | Monday 09 February 2026 03:55:12 +0000 (0:00:00.146) 0:03:01.714 *******
2026-02-09 03:55:16.686720 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:55:16.686726 | orchestrator |
2026-02-09 03:55:16.686731 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-09 03:55:16.686737 | orchestrator | Monday 09 February 2026 03:55:12 +0000 (0:00:00.550) 0:03:02.265 *******
2026-02-09 03:55:16.686743 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:55:16.686749 | orchestrator |
2026-02-09 03:55:16.686754 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-09 03:55:16.686760 | orchestrator | Monday 09 February 2026 03:55:15 +0000 (0:00:03.053) 0:03:05.318 *******
2026-02-09 03:55:16.686766 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:55:16.686771 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:55:16.686777 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:55:16.686783 | orchestrator |
2026-02-09 03:55:16.686788 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 03:55:16.686795 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-09 03:55:16.686803 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-09 03:55:16.686809 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-09 03:55:16.686863 | orchestrator |
2026-02-09 03:55:16.686869 | orchestrator |
2026-02-09 03:55:16.686875 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 03:55:16.686881 | orchestrator | Monday 09 February 2026 03:55:16 +0000 (0:00:00.486) 0:03:05.804 *******
2026-02-09 03:55:16.686887 | orchestrator | ===============================================================================
2026-02-09 03:55:16.686893 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 51.28s
2026-02-09 03:55:16.686899 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.76s
2026-02-09 03:55:16.686906 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.91s
2026-02-09 03:55:16.686912 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.00s
2026-02-09 03:55:16.686918 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.85s
2026-02-09 03:55:16.686924 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.57s
2026-02-09 03:55:16.686930 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.70s
2026-02-09 03:55:16.686937 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.19s
2026-02-09 03:55:16.686943 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.24s
2026-02-09 03:55:16.686968 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.12s
2026-02-09 03:55:16.686976 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.23s
2026-02-09 03:55:16.686982 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.06s
2026-02-09 03:55:16.686988 | orchestrator | keystone : Creating default user role ----------------------------------- 3.05s
2026-02-09 03:55:16.686994 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.62s
2026-02-09 03:55:16.687000 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.19s
2026-02-09 03:55:16.687006 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.13s
2026-02-09 03:55:16.687012 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.12s
2026-02-09 03:55:16.687019 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.98s
2026-02-09 03:55:16.687025 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.75s
2026-02-09 03:55:16.687031 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.48s
2026-02-09 03:55:19.164259 | orchestrator | 2026-02-09 03:55:19 | INFO  | Task 5972f53a-542c-4cce-b92a-ce71b07eda9f (placement) was prepared for execution.
2026-02-09 03:55:19.164348 | orchestrator | 2026-02-09 03:55:19 | INFO  | It takes a moment until task 5972f53a-542c-4cce-b92a-ce71b07eda9f (placement) has been started and output is visible here.
2026-02-09 03:55:53.033140 | orchestrator |
2026-02-09 03:55:53.033281 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 03:55:53.033304 | orchestrator |
2026-02-09 03:55:53.033317 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 03:55:53.033329 | orchestrator | Monday 09 February 2026 03:55:23 +0000 (0:00:00.262) 0:00:00.262 *******
2026-02-09 03:55:53.033340 | orchestrator | ok: [testbed-node-0]
2026-02-09 03:55:53.033352 | orchestrator | ok: [testbed-node-1]
2026-02-09 03:55:53.033363 | orchestrator | ok: [testbed-node-2]
2026-02-09 03:55:53.033375 | orchestrator |
2026-02-09 03:55:53.033387 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 03:55:53.033398 | orchestrator | Monday 09 February 2026 03:55:24 +0000 (0:00:00.378) 0:00:00.640 *******
2026-02-09 03:55:53.033410 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-02-09 03:55:53.033421 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-02-09 03:55:53.033432 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-02-09 03:55:53.033443 | orchestrator |
2026-02-09 03:55:53.033454 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-02-09 03:55:53.033465 | orchestrator |
2026-02-09 03:55:53.033492 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-09 03:55:53.033504 | orchestrator |
Monday 09 February 2026 03:55:24 +0000 (0:00:00.459) 0:00:01.100 ******* 2026-02-09 03:55:53.033516 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:55:53.033527 | orchestrator | 2026-02-09 03:55:53.033538 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-09 03:55:53.033549 | orchestrator | Monday 09 February 2026 03:55:25 +0000 (0:00:00.590) 0:00:01.691 ******* 2026-02-09 03:55:53.033560 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-09 03:55:53.033571 | orchestrator | 2026-02-09 03:55:53.033582 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-09 03:55:53.033593 | orchestrator | Monday 09 February 2026 03:55:28 +0000 (0:00:03.696) 0:00:05.387 ******* 2026-02-09 03:55:53.033604 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-09 03:55:53.033642 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-09 03:55:53.033654 | orchestrator | 2026-02-09 03:55:53.033686 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-09 03:55:53.033699 | orchestrator | Monday 09 February 2026 03:55:34 +0000 (0:00:05.949) 0:00:11.336 ******* 2026-02-09 03:55:53.033713 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-09 03:55:53.033726 | orchestrator | 2026-02-09 03:55:53.033739 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-09 03:55:53.033753 | orchestrator | Monday 09 February 2026 03:55:38 +0000 (0:00:03.454) 0:00:14.791 ******* 2026-02-09 03:55:53.033766 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-09 03:55:53.033779 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> service) 2026-02-09 03:55:53.033792 | orchestrator | 2026-02-09 03:55:53.033805 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-09 03:55:53.033818 | orchestrator | Monday 09 February 2026 03:55:42 +0000 (0:00:03.902) 0:00:18.693 ******* 2026-02-09 03:55:53.033831 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-09 03:55:53.033844 | orchestrator | 2026-02-09 03:55:53.033858 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-09 03:55:53.033871 | orchestrator | Monday 09 February 2026 03:55:45 +0000 (0:00:02.999) 0:00:21.693 ******* 2026-02-09 03:55:53.033884 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-09 03:55:53.033897 | orchestrator | 2026-02-09 03:55:53.033908 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-09 03:55:53.033918 | orchestrator | Monday 09 February 2026 03:55:48 +0000 (0:00:03.892) 0:00:25.585 ******* 2026-02-09 03:55:53.033929 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:55:53.033940 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:55:53.033951 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:55:53.033961 | orchestrator | 2026-02-09 03:55:53.033972 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-09 03:55:53.033983 | orchestrator | Monday 09 February 2026 03:55:49 +0000 (0:00:00.300) 0:00:25.886 ******* 2026-02-09 03:55:53.033997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:55:53.034092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:55:53.034115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:55:53.034136 | orchestrator | 2026-02-09 03:55:53.034148 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-09 03:55:53.034159 | orchestrator | Monday 09 February 2026 03:55:50 +0000 (0:00:00.821) 0:00:26.708 ******* 2026-02-09 03:55:53.034170 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:55:53.034181 | orchestrator | 2026-02-09 03:55:53.034191 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-09 03:55:53.034202 | orchestrator | Monday 09 February 2026 03:55:50 +0000 (0:00:00.352) 0:00:27.060 ******* 2026-02-09 03:55:53.034213 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:55:53.034224 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:55:53.034235 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:55:53.034246 | orchestrator | 2026-02-09 03:55:53.034256 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-09 03:55:53.034267 | orchestrator | Monday 09 February 2026 03:55:50 +0000 (0:00:00.307) 0:00:27.368 ******* 2026-02-09 03:55:53.034282 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 03:55:53.034302 | orchestrator | 2026-02-09 03:55:53.034322 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-09 03:55:53.034343 | orchestrator | Monday 09 February 2026 
03:55:51 +0000 (0:00:00.681) 0:00:28.050 ******* 2026-02-09 03:55:53.034365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:55:53.034399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:55:55.885937 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:55:55.886069 | orchestrator | 2026-02-09 03:55:55.886085 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-09 03:55:55.886095 | orchestrator | Monday 09 February 2026 03:55:53 +0000 (0:00:01.583) 0:00:29.633 ******* 2026-02-09 03:55:55.886105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:55:55.886114 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:55:55.886123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:55:55.886132 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:55:55.886140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:55:55.886168 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:55:55.886176 | orchestrator | 2026-02-09 03:55:55.886184 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-09 03:55:55.886206 | orchestrator | Monday 09 February 2026 03:55:53 +0000 (0:00:00.512) 0:00:30.146 ******* 2026-02-09 03:55:55.886221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:55:55.886229 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:55:55.886238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:55:55.886246 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:55:55.886254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:55:55.886262 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:55:55.886270 | orchestrator | 2026-02-09 03:55:55.886278 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-09 03:55:55.886286 | orchestrator | Monday 09 February 2026 03:55:54 +0000 (0:00:00.768) 0:00:30.914 ******* 2026-02-09 03:55:55.886294 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:55:55.886319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:56:02.744701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:56:02.744809 | orchestrator | 2026-02-09 03:56:02.744828 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-09 03:56:02.744841 | orchestrator | Monday 09 February 2026 03:55:55 +0000 (0:00:01.580) 0:00:32.494 ******* 2026-02-09 03:56:02.744853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-09 03:56:02.744866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:56:02.744917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:56:02.744936 | orchestrator | 2026-02-09 03:56:02.744955 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-09 03:56:02.744973 | orchestrator | Monday 09 February 2026 03:55:58 +0000 (0:00:02.315) 0:00:34.810 ******* 2026-02-09 03:56:02.745010 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-09 03:56:02.745032 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-09 03:56:02.745049 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-09 03:56:02.745068 | orchestrator | 2026-02-09 03:56:02.745086 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-09 03:56:02.745104 | orchestrator | Monday 09 February 2026 03:55:59 +0000 (0:00:01.427) 0:00:36.237 ******* 2026-02-09 03:56:02.745122 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:56:02.745141 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:56:02.745158 | orchestrator | changed: [testbed-node-2] 2026-02-09 03:56:02.745175 | orchestrator | 2026-02-09 03:56:02.745192 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-09 03:56:02.745211 | orchestrator | Monday 09 February 2026 03:56:00 +0000 (0:00:01.328) 0:00:37.565 ******* 2026-02-09 03:56:02.745232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:56:02.745252 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:56:02.745272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:56:02.745308 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:56:02.745328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-09 03:56:02.745347 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:56:02.745367 | orchestrator | 2026-02-09 03:56:02.745384 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-09 03:56:02.745403 | orchestrator | Monday 09 February 2026 03:56:01 +0000 (0:00:00.756) 0:00:38.322 ******* 2026-02-09 03:56:02.745451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:56:27.583828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:56:27.583953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-09 03:56:27.583991 | orchestrator | 2026-02-09 03:56:27.584002 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-09 03:56:27.584012 | orchestrator | Monday 09 February 2026 03:56:02 +0000 (0:00:01.033) 0:00:39.356 ******* 2026-02-09 03:56:27.584021 | orchestrator | changed: [testbed-node-0] 2026-02-09 
03:56:27.584030 | orchestrator | 2026-02-09 03:56:27.584038 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-09 03:56:27.584045 | orchestrator | Monday 09 February 2026 03:56:04 +0000 (0:00:01.933) 0:00:41.289 ******* 2026-02-09 03:56:27.584052 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:56:27.584060 | orchestrator | 2026-02-09 03:56:27.584068 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-09 03:56:27.584075 | orchestrator | Monday 09 February 2026 03:56:06 +0000 (0:00:02.061) 0:00:43.350 ******* 2026-02-09 03:56:27.584083 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:56:27.584090 | orchestrator | 2026-02-09 03:56:27.584097 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-09 03:56:27.584105 | orchestrator | Monday 09 February 2026 03:56:19 +0000 (0:00:12.771) 0:00:56.122 ******* 2026-02-09 03:56:27.584112 | orchestrator | 2026-02-09 03:56:27.584119 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-09 03:56:27.584127 | orchestrator | Monday 09 February 2026 03:56:19 +0000 (0:00:00.084) 0:00:56.207 ******* 2026-02-09 03:56:27.584134 | orchestrator | 2026-02-09 03:56:27.584141 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-09 03:56:27.584148 | orchestrator | Monday 09 February 2026 03:56:19 +0000 (0:00:00.073) 0:00:56.280 ******* 2026-02-09 03:56:27.584156 | orchestrator | 2026-02-09 03:56:27.584163 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-09 03:56:27.584170 | orchestrator | Monday 09 February 2026 03:56:19 +0000 (0:00:00.069) 0:00:56.350 ******* 2026-02-09 03:56:27.584177 | orchestrator | changed: [testbed-node-1] 2026-02-09 03:56:27.584185 | orchestrator | changed: [testbed-node-2] 2026-02-09 
03:56:27.584192 | orchestrator | changed: [testbed-node-0] 2026-02-09 03:56:27.584199 | orchestrator | 2026-02-09 03:56:27.584233 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 03:56:27.584243 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 03:56:27.584253 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-09 03:56:27.584260 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-09 03:56:27.584268 | orchestrator | 2026-02-09 03:56:27.584275 | orchestrator | 2026-02-09 03:56:27.584282 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 03:56:27.584290 | orchestrator | Monday 09 February 2026 03:56:27 +0000 (0:00:07.475) 0:01:03.826 ******* 2026-02-09 03:56:27.584297 | orchestrator | =============================================================================== 2026-02-09 03:56:27.584304 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.77s 2026-02-09 03:56:27.584348 | orchestrator | placement : Restart placement-api container ----------------------------- 7.48s 2026-02-09 03:56:27.584366 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 5.95s 2026-02-09 03:56:27.584380 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.90s 2026-02-09 03:56:27.584393 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.89s 2026-02-09 03:56:27.584406 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.70s 2026-02-09 03:56:27.584418 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.45s 2026-02-09 03:56:27.584431 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.00s 2026-02-09 03:56:27.584466 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.32s 2026-02-09 03:56:27.584478 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.06s 2026-02-09 03:56:27.584491 | orchestrator | placement : Creating placement databases -------------------------------- 1.93s 2026-02-09 03:56:27.584505 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.58s 2026-02-09 03:56:27.584517 | orchestrator | placement : Copying over config.json files for services ----------------- 1.58s 2026-02-09 03:56:27.584530 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.43s 2026-02-09 03:56:27.584542 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.33s 2026-02-09 03:56:27.584550 | orchestrator | placement : Check placement containers ---------------------------------- 1.03s 2026-02-09 03:56:27.584558 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.82s 2026-02-09 03:56:27.584567 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.77s 2026-02-09 03:56:27.584575 | orchestrator | placement : Copying over existing policy file --------------------------- 0.76s 2026-02-09 03:56:27.584583 | orchestrator | placement : include_tasks ----------------------------------------------- 0.68s 2026-02-09 03:56:29.985911 | orchestrator | 2026-02-09 03:56:29 | INFO  | Task b088f92e-eca3-4e47-ad97-fb5e9daae3fa (neutron) was prepared for execution. 2026-02-09 03:56:29.986064 | orchestrator | 2026-02-09 03:56:29 | INFO  | It takes a moment until task b088f92e-eca3-4e47-ad97-fb5e9daae3fa (neutron) has been started and output is visible here. 
2026-02-09 03:57:17.522450 | orchestrator | 2026-02-09 03:57:17.522580 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 03:57:17.522603 | orchestrator | 2026-02-09 03:57:17.522615 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 03:57:17.522626 | orchestrator | Monday 09 February 2026 03:56:34 +0000 (0:00:00.283) 0:00:00.283 ******* 2026-02-09 03:57:17.522636 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:57:17.522646 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:57:17.522656 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:57:17.522666 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:57:17.522675 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:57:17.522685 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:57:17.522694 | orchestrator | 2026-02-09 03:57:17.522704 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 03:57:17.522714 | orchestrator | Monday 09 February 2026 03:56:35 +0000 (0:00:00.742) 0:00:01.026 ******* 2026-02-09 03:57:17.522724 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-09 03:57:17.522733 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-09 03:57:17.522743 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-09 03:57:17.522752 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-09 03:57:17.522761 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-09 03:57:17.522771 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-09 03:57:17.522780 | orchestrator | 2026-02-09 03:57:17.522813 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-09 03:57:17.522823 | orchestrator | 2026-02-09 03:57:17.522834 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-09 03:57:17.522850 | orchestrator | Monday 09 February 2026 03:56:35 +0000 (0:00:00.681) 0:00:01.707 ******* 2026-02-09 03:57:17.522867 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:57:17.522882 | orchestrator | 2026-02-09 03:57:17.522915 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-09 03:57:17.522932 | orchestrator | Monday 09 February 2026 03:56:37 +0000 (0:00:01.393) 0:00:03.101 ******* 2026-02-09 03:57:17.522947 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:57:17.522962 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:57:17.522979 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:57:17.522995 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:57:17.523011 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:57:17.523027 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:57:17.523045 | orchestrator | 2026-02-09 03:57:17.523062 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-09 03:57:17.523079 | orchestrator | Monday 09 February 2026 03:56:38 +0000 (0:00:01.390) 0:00:04.492 ******* 2026-02-09 03:57:17.523095 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:57:17.523113 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:57:17.523129 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:57:17.523146 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:57:17.523156 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:57:17.523165 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:57:17.523175 | orchestrator | 2026-02-09 03:57:17.523184 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-09 03:57:17.523272 | orchestrator | Monday 09 February 2026 03:56:39 +0000 (0:00:01.091) 0:00:05.584 ******* 
2026-02-09 03:57:17.523286 | orchestrator | ok: [testbed-node-0] => { 2026-02-09 03:57:17.523297 | orchestrator |  "changed": false, 2026-02-09 03:57:17.523307 | orchestrator |  "msg": "All assertions passed" 2026-02-09 03:57:17.523317 | orchestrator | } 2026-02-09 03:57:17.523326 | orchestrator | ok: [testbed-node-1] => { 2026-02-09 03:57:17.523336 | orchestrator |  "changed": false, 2026-02-09 03:57:17.523345 | orchestrator |  "msg": "All assertions passed" 2026-02-09 03:57:17.523355 | orchestrator | } 2026-02-09 03:57:17.523365 | orchestrator | ok: [testbed-node-2] => { 2026-02-09 03:57:17.523375 | orchestrator |  "changed": false, 2026-02-09 03:57:17.523384 | orchestrator |  "msg": "All assertions passed" 2026-02-09 03:57:17.523394 | orchestrator | } 2026-02-09 03:57:17.523403 | orchestrator | ok: [testbed-node-3] => { 2026-02-09 03:57:17.523413 | orchestrator |  "changed": false, 2026-02-09 03:57:17.523422 | orchestrator |  "msg": "All assertions passed" 2026-02-09 03:57:17.523431 | orchestrator | } 2026-02-09 03:57:17.523441 | orchestrator | ok: [testbed-node-4] => { 2026-02-09 03:57:17.523450 | orchestrator |  "changed": false, 2026-02-09 03:57:17.523460 | orchestrator |  "msg": "All assertions passed" 2026-02-09 03:57:17.523470 | orchestrator | } 2026-02-09 03:57:17.523480 | orchestrator | ok: [testbed-node-5] => { 2026-02-09 03:57:17.523489 | orchestrator |  "changed": false, 2026-02-09 03:57:17.523499 | orchestrator |  "msg": "All assertions passed" 2026-02-09 03:57:17.523508 | orchestrator | } 2026-02-09 03:57:17.523518 | orchestrator | 2026-02-09 03:57:17.523527 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-09 03:57:17.523537 | orchestrator | Monday 09 February 2026 03:56:40 +0000 (0:00:00.872) 0:00:06.457 ******* 2026-02-09 03:57:17.523547 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:17.523556 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:57:17.523565 | orchestrator 
| skipping: [testbed-node-2] 2026-02-09 03:57:17.523575 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:17.523584 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:17.523605 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:57:17.523615 | orchestrator | 2026-02-09 03:57:17.523625 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-09 03:57:17.523634 | orchestrator | Monday 09 February 2026 03:56:41 +0000 (0:00:00.671) 0:00:07.129 ******* 2026-02-09 03:57:17.523644 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-09 03:57:17.523654 | orchestrator | 2026-02-09 03:57:17.523663 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-09 03:57:17.523673 | orchestrator | Monday 09 February 2026 03:56:44 +0000 (0:00:03.687) 0:00:10.816 ******* 2026-02-09 03:57:17.523682 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-09 03:57:17.523693 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-09 03:57:17.523703 | orchestrator | 2026-02-09 03:57:17.523732 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-09 03:57:17.523742 | orchestrator | Monday 09 February 2026 03:56:51 +0000 (0:00:06.235) 0:00:17.052 ******* 2026-02-09 03:57:17.523752 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-09 03:57:17.523761 | orchestrator | 2026-02-09 03:57:17.523771 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-09 03:57:17.523780 | orchestrator | Monday 09 February 2026 03:56:54 +0000 (0:00:03.025) 0:00:20.078 ******* 2026-02-09 03:57:17.523790 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-09 03:57:17.523799 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-09 03:57:17.523808 | orchestrator | 2026-02-09 03:57:17.523817 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-09 03:57:17.523825 | orchestrator | Monday 09 February 2026 03:56:58 +0000 (0:00:03.809) 0:00:23.887 ******* 2026-02-09 03:57:17.523834 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-09 03:57:17.523842 | orchestrator | 2026-02-09 03:57:17.523851 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-09 03:57:17.523859 | orchestrator | Monday 09 February 2026 03:57:01 +0000 (0:00:02.988) 0:00:26.876 ******* 2026-02-09 03:57:17.523868 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-09 03:57:17.523876 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-09 03:57:17.523885 | orchestrator | 2026-02-09 03:57:17.523893 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-09 03:57:17.523902 | orchestrator | Monday 09 February 2026 03:57:08 +0000 (0:00:07.217) 0:00:34.094 ******* 2026-02-09 03:57:17.523910 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:17.523919 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:57:17.523927 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:57:17.523936 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:17.523944 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:17.523953 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:57:17.523961 | orchestrator | 2026-02-09 03:57:17.523970 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-09 03:57:17.523985 | orchestrator | Monday 09 February 2026 03:57:09 +0000 (0:00:00.841) 0:00:34.935 ******* 2026-02-09 03:57:17.523995 | orchestrator | skipping: [testbed-node-1] 2026-02-09 
03:57:17.524003 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:57:17.524012 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:17.524020 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:17.524029 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:57:17.524037 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:17.524046 | orchestrator | 2026-02-09 03:57:17.524054 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-09 03:57:17.524063 | orchestrator | Monday 09 February 2026 03:57:11 +0000 (0:00:02.144) 0:00:37.079 ******* 2026-02-09 03:57:17.524072 | orchestrator | ok: [testbed-node-0] 2026-02-09 03:57:17.524086 | orchestrator | ok: [testbed-node-1] 2026-02-09 03:57:17.524095 | orchestrator | ok: [testbed-node-2] 2026-02-09 03:57:17.524104 | orchestrator | ok: [testbed-node-3] 2026-02-09 03:57:17.524112 | orchestrator | ok: [testbed-node-4] 2026-02-09 03:57:17.524122 | orchestrator | ok: [testbed-node-5] 2026-02-09 03:57:17.524137 | orchestrator | 2026-02-09 03:57:17.524150 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-09 03:57:17.524164 | orchestrator | Monday 09 February 2026 03:57:12 +0000 (0:00:01.168) 0:00:38.248 ******* 2026-02-09 03:57:17.524178 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:17.524216 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:57:17.524232 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:57:17.524245 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:17.524259 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:17.524272 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:57:17.524281 | orchestrator | 2026-02-09 03:57:17.524289 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-09 03:57:17.524298 | orchestrator | Monday 09 February 2026 03:57:14 +0000 (0:00:02.433) 
0:00:40.682 ******* 2026-02-09 03:57:17.524310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:17.524333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:23.181249 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:23.181385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:23.181425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:23.181440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:23.181452 | orchestrator | 2026-02-09 03:57:23.181462 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-09 03:57:23.181469 | orchestrator | Monday 09 February 2026 03:57:17 +0000 (0:00:02.701) 0:00:43.383 ******* 2026-02-09 03:57:23.181475 | orchestrator | [WARNING]: Skipped 2026-02-09 03:57:23.181484 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-09 03:57:23.181495 | orchestrator | due to this access issue: 2026-02-09 03:57:23.181508 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-09 03:57:23.181518 | orchestrator | a directory 2026-02-09 03:57:23.181529 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 03:57:23.181539 | orchestrator | 2026-02-09 03:57:23.181549 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-09 03:57:23.181560 | orchestrator | Monday 09 February 2026 03:57:18 +0000 (0:00:00.842) 0:00:44.226 ******* 2026-02-09 03:57:23.181572 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 03:57:23.181584 | orchestrator | 2026-02-09 03:57:23.181594 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-09 03:57:23.181620 | orchestrator | Monday 09 February 2026 03:57:19 +0000 (0:00:01.323) 0:00:45.550 ******* 2026-02-09 03:57:23.181632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:23.181658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:23.181669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:23.181680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:23.181699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:28.116032 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:28.116196 | orchestrator | 2026-02-09 03:57:28.116214 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-09 03:57:28.116236 | orchestrator | Monday 09 February 2026 03:57:23 +0000 (0:00:03.493) 0:00:49.044 ******* 2026-02-09 03:57:28.116246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:28.116254 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:28.116261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:28.116268 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:57:28.116274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:28.116281 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:28.116303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:28.116321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:28.116328 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:28.116334 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:57:28.116342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:28.116349 | orchestrator | skipping: [testbed-node-5] 
2026-02-09 03:57:28.116355 | orchestrator | 2026-02-09 03:57:28.116361 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-09 03:57:28.116368 | orchestrator | Monday 09 February 2026 03:57:25 +0000 (0:00:02.058) 0:00:51.103 ******* 2026-02-09 03:57:28.116375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:28.116381 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:28.116393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:33.928164 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:57:33.928288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:33.928310 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:57:33.928324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:33.928336 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:33.928348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:33.928359 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:33.928371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:33.928433 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:57:33.928446 | orchestrator | 2026-02-09 
03:57:33.928458 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-09 03:57:33.928471 | orchestrator | Monday 09 February 2026 03:57:28 +0000 (0:00:02.876) 0:00:53.980 ******* 2026-02-09 03:57:33.928481 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:33.928492 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:57:33.928503 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:57:33.928514 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:33.928525 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:33.928536 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:57:33.928547 | orchestrator | 2026-02-09 03:57:33.928558 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-09 03:57:33.928569 | orchestrator | Monday 09 February 2026 03:57:30 +0000 (0:00:02.325) 0:00:56.305 ******* 2026-02-09 03:57:33.928580 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:33.928591 | orchestrator | 2026-02-09 03:57:33.928602 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-09 03:57:33.928629 | orchestrator | Monday 09 February 2026 03:57:30 +0000 (0:00:00.157) 0:00:56.462 ******* 2026-02-09 03:57:33.928641 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:33.928651 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:57:33.928662 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:57:33.928675 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:33.928688 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:33.928701 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:57:33.928714 | orchestrator | 2026-02-09 03:57:33.928727 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-09 03:57:33.928740 | orchestrator | Monday 09 February 2026 03:57:31 +0000 (0:00:00.626) 
0:00:57.089 ******* 2026-02-09 03:57:33.928758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:33.928773 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:33.928786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 
03:57:33.928800 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:57:33.928813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:33.928835 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:57:33.928849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:33.928862 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:33.928884 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:42.376512 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:57:42.376610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:42.376623 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:42.376630 | orchestrator | 2026-02-09 03:57:42.376637 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-09 03:57:42.376645 | orchestrator | Monday 09 February 2026 03:57:33 +0000 (0:00:02.699) 0:00:59.788 ******* 2026-02-09 03:57:42.376653 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:42.376677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:42.376685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:42.376708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:42.376716 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:42.376723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:42.376735 | orchestrator | 2026-02-09 03:57:42.376742 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-09 03:57:42.376748 | orchestrator | Monday 09 February 2026 03:57:37 +0000 (0:00:03.223) 0:01:03.012 ******* 2026-02-09 03:57:42.376755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:42.376762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:42.376779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:47.153607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:57:47.153724 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:47.153738 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-09 03:57:47.153746 | orchestrator | 2026-02-09 03:57:47.153754 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-09 03:57:47.153762 | orchestrator | Monday 09 February 2026 03:57:42 +0000 (0:00:05.226) 0:01:08.238 ******* 2026-02-09 03:57:47.153768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:47.153804 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:57:47.153817 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:57:47.153825 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:57:47.153833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 
03:57:47.153839 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:57:47.153846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:47.153852 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:57:47.153858 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:57:47.153865 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:57:47.153874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-09 03:57:47.153881 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:57:47.153887 | orchestrator |
2026-02-09 03:57:47.153893 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-02-09 03:57:47.153899 | orchestrator | Monday 09 February 2026 03:57:44 +0000 (0:00:02.199) 0:01:10.437 *******
2026-02-09 03:57:47.153905 | orchestrator | changed: [testbed-node-0]
2026-02-09 03:57:47.153917 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:57:47.153924 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:57:47.153931 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:57:47.153937 | orchestrator | changed: [testbed-node-1]
2026-02-09 03:57:47.153948 | orchestrator | changed: [testbed-node-2]
2026-02-09 03:58:06.987330 | orchestrator |
2026-02-09 03:58:06.987445 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-02-09 03:58:06.987462 | orchestrator | Monday 09 February 2026 03:57:47 +0000 (0:00:02.572) 0:01:13.010 *******
2026-02-09 03:58:06.987475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True,
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:58:06.987523 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:58:06.987539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:58:06.987549 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:58:06.987559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:58:06.987569 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:58:06.987580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:58:06.987643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:58:06.987656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-09 03:58:06.987665 | orchestrator | 2026-02-09 03:58:06.987673 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-09 03:58:06.987682 | orchestrator | Monday 09 February 2026 03:57:50 +0000 (0:00:03.677) 0:01:16.687 ******* 2026-02-09 03:58:06.987691 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:58:06.987700 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:58:06.987708 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:58:06.987716 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:58:06.987724 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:58:06.987734 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:58:06.987743 | orchestrator | 2026-02-09 03:58:06.987752 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] 
****************************
2026-02-09 03:58:06.987762 | orchestrator | Monday 09 February 2026 03:57:52 +0000 (0:00:02.136) 0:01:18.824 *******
2026-02-09 03:58:06.987770 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:06.987780 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:06.987789 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:06.987796 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:06.987806 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:06.987813 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:06.987819 | orchestrator |
2026-02-09 03:58:06.987824 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-02-09 03:58:06.987830 | orchestrator | Monday 09 February 2026 03:57:55 +0000 (0:00:02.525) 0:01:21.350 *******
2026-02-09 03:58:06.987835 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:06.987841 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:06.987848 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:06.987856 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:06.987864 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:06.987873 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:06.987882 | orchestrator |
2026-02-09 03:58:06.987891 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-02-09 03:58:06.987901 | orchestrator | Monday 09 February 2026 03:57:57 +0000 (0:00:02.156) 0:01:23.507 *******
2026-02-09 03:58:06.987918 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:06.987928 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:06.987937 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:06.987946 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:06.987956 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:06.987966 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:06.988015 | orchestrator |
2026-02-09 03:58:06.988022 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-02-09 03:58:06.988029 | orchestrator | Monday 09 February 2026 03:57:59 +0000 (0:00:02.130) 0:01:25.637 *******
2026-02-09 03:58:06.988036 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:06.988042 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:06.988048 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:06.988054 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:06.988061 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:06.988067 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:06.988073 | orchestrator |
2026-02-09 03:58:06.988079 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-02-09 03:58:06.988086 | orchestrator | Monday 09 February 2026 03:58:01 +0000 (0:00:02.216) 0:01:27.854 *******
2026-02-09 03:58:06.988092 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:06.988098 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:06.988105 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:06.988112 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:06.988118 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:06.988125 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:06.988131 | orchestrator |
2026-02-09 03:58:06.988143 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-02-09 03:58:06.988149 | orchestrator | Monday 09 February 2026 03:58:04 +0000 (0:00:02.414) 0:01:30.269 *******
2026-02-09 03:58:06.988154 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-09 03:58:06.988161 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:06.988166 | orchestrator | skipping: [testbed-node-2] =>
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-09 03:58:06.988172 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:58:06.988177 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-09 03:58:06.988189 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:58:11.504547 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-09 03:58:11.504645 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:58:11.504655 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-09 03:58:11.504660 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:58:11.504664 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-09 03:58:11.504668 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:58:11.504674 | orchestrator | 2026-02-09 03:58:11.504681 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-09 03:58:11.504688 | orchestrator | Monday 09 February 2026 03:58:06 +0000 (0:00:02.577) 0:01:32.846 ******* 2026-02-09 03:58:11.504698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:58:11.504728 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:58:11.504736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:58:11.504742 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:58:11.504749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:58:11.504755 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:58:11.504784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:58:11.504790 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:58:11.504794 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:58:11.504798 | orchestrator | skipping: 
[testbed-node-5] 2026-02-09 03:58:11.504802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:58:11.504810 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:58:11.504813 | orchestrator | 2026-02-09 03:58:11.504817 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-09 03:58:11.504821 | orchestrator | Monday 09 February 2026 03:58:09 +0000 (0:00:02.347) 0:01:35.194 ******* 2026-02-09 03:58:11.504825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:58:11.504829 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:58:11.504836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:58:11.504840 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:58:11.504848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-09 03:58:39.286470 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:58:39.286549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:58:39.286577 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:58:39.286584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:58:39.286589 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:58:39.286643 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 03:58:39.286653 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:58:39.286661 | orchestrator | 2026-02-09 03:58:39.286670 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-09 03:58:39.286680 | orchestrator | Monday 09 February 2026 03:58:11 +0000 (0:00:02.173) 0:01:37.367 ******* 2026-02-09 03:58:39.286688 | orchestrator | skipping: [testbed-node-2] 2026-02-09 03:58:39.286696 | orchestrator | skipping: [testbed-node-1] 2026-02-09 03:58:39.286703 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:58:39.286711 | orchestrator | skipping: [testbed-node-3] 2026-02-09 03:58:39.286719 | orchestrator | skipping: [testbed-node-5] 2026-02-09 03:58:39.286727 | orchestrator | skipping: [testbed-node-4] 2026-02-09 03:58:39.286734 | orchestrator | 2026-02-09 03:58:39.286738 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-09 03:58:39.286754 | orchestrator | Monday 09 February 2026 03:58:13 +0000 (0:00:02.264) 0:01:39.632 ******* 2026-02-09 03:58:39.286759 | orchestrator | skipping: [testbed-node-0] 2026-02-09 03:58:39.286764 | orchestrator | skipping: [testbed-node-1] 2026-02-09 
03:58:39.286769 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:39.286774 | orchestrator | changed: [testbed-node-3]
2026-02-09 03:58:39.286778 | orchestrator | changed: [testbed-node-4]
2026-02-09 03:58:39.286783 | orchestrator | changed: [testbed-node-5]
2026-02-09 03:58:39.286788 | orchestrator |
2026-02-09 03:58:39.286792 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-09 03:58:39.286797 | orchestrator | Monday 09 February 2026 03:58:17 +0000 (0:00:03.900) 0:01:43.532 *******
2026-02-09 03:58:39.286808 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:39.286813 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:39.286818 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:39.286822 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:39.286827 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:39.286832 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:39.286837 | orchestrator |
2026-02-09 03:58:39.286886 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-09 03:58:39.286893 | orchestrator | Monday 09 February 2026 03:58:19 +0000 (0:00:02.156) 0:01:45.688 *******
2026-02-09 03:58:39.286898 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:39.286903 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:39.286907 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:39.286912 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:39.286917 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:39.286921 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:39.286926 | orchestrator |
2026-02-09 03:58:39.286931 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-09 03:58:39.286950 | orchestrator | Monday 09 February 2026 03:58:22 +0000 (0:00:02.543) 0:01:48.231 *******
2026-02-09 03:58:39.286955 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:39.286959 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:39.286964 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:39.286969 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:39.286974 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:39.286981 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:39.286989 | orchestrator |
2026-02-09 03:58:39.286997 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-09 03:58:39.287005 | orchestrator | Monday 09 February 2026 03:58:24 +0000 (0:00:02.309) 0:01:50.541 *******
2026-02-09 03:58:39.287013 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:39.287021 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:39.287029 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:39.287038 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:39.287047 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:39.287055 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:39.287063 | orchestrator |
2026-02-09 03:58:39.287071 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-09 03:58:39.287079 | orchestrator | Monday 09 February 2026 03:58:27 +0000 (0:00:02.458) 0:01:53.000 *******
2026-02-09 03:58:39.287085 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:39.287091 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:39.287096 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:39.287102 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:39.287108 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:39.287114 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:39.287119 | orchestrator |
2026-02-09 03:58:39.287125 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-09 03:58:39.287130 | orchestrator | Monday 09 February 2026 03:58:29 +0000 (0:00:02.276) 0:01:55.267 *******
2026-02-09 03:58:39.287136 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:39.287141 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:39.287147 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:39.287152 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:39.287158 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:39.287163 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:39.287169 | orchestrator |
2026-02-09 03:58:39.287174 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-09 03:58:39.287180 | orchestrator | Monday 09 February 2026 03:58:31 +0000 (0:00:02.709) 0:01:57.544 *******
2026-02-09 03:58:39.287186 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:39.287191 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:39.287197 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:39.287208 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:39.287214 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:39.287219 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:39.287225 | orchestrator |
2026-02-09 03:58:39.287230 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-09 03:58:39.287236 | orchestrator | Monday 09 February 2026 03:58:34 +0000 (0:00:02.186) 0:02:00.253 *******
2026-02-09 03:58:39.287242 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-09 03:58:39.287248 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:39.287254 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-09 03:58:39.287259 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:39.287265 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-09 03:58:39.287271 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:39.287276 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-09 03:58:39.287282 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:39.287288 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-09 03:58:39.287293 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:39.287299 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-09 03:58:39.287305 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:39.287310 | orchestrator |
2026-02-09 03:58:39.287316 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-09 03:58:39.287330 | orchestrator | Monday 09 February 2026 03:58:36 +0000 (0:00:02.186) 0:02:02.439 *******
2026-02-09 03:58:39.287339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-09 03:58:39.287349 | orchestrator | skipping: [testbed-node-1]
2026-02-09 03:58:39.287365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-09 03:58:42.263125 | orchestrator | skipping: [testbed-node-0]
2026-02-09 03:58:42.263232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-09 03:58:42.263273 | orchestrator | skipping: [testbed-node-2]
2026-02-09 03:58:42.263285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-09 03:58:42.263295 | orchestrator | skipping: [testbed-node-3]
2026-02-09 03:58:42.263316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-09 03:58:42.263326 | orchestrator | skipping: [testbed-node-4]
2026-02-09 03:58:42.263335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-09 03:58:42.263344 | orchestrator | skipping: [testbed-node-5]
2026-02-09 03:58:42.263354 | orchestrator |
2026-02-09 03:58:42.263364 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-02-09 03:58:42.263374 | orchestrator | Monday 09 February 2026 03:58:39 +0000 (0:00:02.709) 0:02:05.148 *******
2026-02-09 03:58:42.263398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-09 03:58:42.263417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-09 03:58:42.263432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-09 03:58:42.263441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-09 03:58:42.263452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-09 03:58:42.263467 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-09 04:00:51.555070 | orchestrator |
2026-02-09 04:00:51.555161 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-09 04:00:51.555173 | orchestrator | Monday 09 February 2026 03:58:42 +0000 (0:00:02.976) 0:02:08.125 *******
2026-02-09 04:00:51.555180 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:00:51.555186 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:00:51.555190 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:00:51.555194 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:00:51.555198 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:00:51.555202 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:00:51.555206 | orchestrator |
2026-02-09 04:00:51.555210 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-09 04:00:51.555214 | orchestrator | Monday 09 February 2026 03:58:43 +0000 (0:00:00.875) 0:02:09.001 *******
2026-02-09 04:00:51.555218 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:00:51.555222 | orchestrator |
2026-02-09 04:00:51.555225 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-09 04:00:51.555229 | orchestrator | Monday 09 February 2026 03:58:45 +0000 (0:00:02.026) 0:02:11.027 *******
2026-02-09 04:00:51.555233 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:00:51.555237 | orchestrator |
2026-02-09 04:00:51.555241 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-09 04:00:51.555244 | orchestrator | Monday 09 February 2026 03:58:47 +0000 (0:00:02.201) 0:02:13.229 *******
2026-02-09 04:00:51.555248 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:00:51.555252 | orchestrator |
2026-02-09 04:00:51.555256 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-09 04:00:51.555259 | orchestrator | Monday 09 February 2026 03:59:24 +0000 (0:00:37.573) 0:02:50.803 *******
2026-02-09 04:00:51.555264 | orchestrator |
2026-02-09 04:00:51.555268 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-09 04:00:51.555271 | orchestrator | Monday 09 February 2026 03:59:25 +0000 (0:00:00.075) 0:02:50.878 *******
2026-02-09 04:00:51.555275 | orchestrator |
2026-02-09 04:00:51.555279 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-09 04:00:51.555283 | orchestrator | Monday 09 February 2026 03:59:25 +0000 (0:00:00.072) 0:02:50.951 *******
2026-02-09 04:00:51.555286 | orchestrator |
2026-02-09 04:00:51.555290 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-09 04:00:51.555294 | orchestrator | Monday 09 February 2026 03:59:25 +0000 (0:00:00.070) 0:02:51.022 *******
2026-02-09 04:00:51.555297 | orchestrator |
2026-02-09 04:00:51.555301 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-09 04:00:51.555305 | orchestrator | Monday 09 February 2026 03:59:25 +0000 (0:00:00.070) 0:02:51.092 *******
2026-02-09 04:00:51.555309 | orchestrator |
2026-02-09 04:00:51.555324 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-09 04:00:51.555328 | orchestrator | Monday 09 February 2026 03:59:25 +0000 (0:00:00.075) 0:02:51.167 *******
2026-02-09 04:00:51.555331 | orchestrator |
2026-02-09 04:00:51.555335 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-09 04:00:51.555339 | orchestrator | Monday 09 February 2026 03:59:25 +0000 (0:00:00.072) 0:02:51.240 *******
2026-02-09 04:00:51.555343 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:00:51.555360 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:00:51.555364 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:00:51.555368 | orchestrator |
2026-02-09 04:00:51.555428 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-09 04:00:51.555432 | orchestrator | Monday 09 February 2026 03:59:47 +0000 (0:00:22.030) 0:03:13.270 *******
2026-02-09 04:00:51.555435 | orchestrator | changed: [testbed-node-3]
2026-02-09 04:00:51.555439 | orchestrator | changed: [testbed-node-5]
2026-02-09 04:00:51.555443 | orchestrator | changed: [testbed-node-4]
2026-02-09 04:00:51.555446 | orchestrator |
2026-02-09 04:00:51.555450 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 04:00:51.555455 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-09 04:00:51.555461 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-09 04:00:51.555464 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-09 04:00:51.555469 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-09 04:00:51.555472 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-09 04:00:51.555476 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-09 04:00:51.555480 | orchestrator |
2026-02-09 04:00:51.555484 | orchestrator |
2026-02-09 04:00:51.555487 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 04:00:51.555491 | orchestrator | Monday 09 February 2026 04:00:51 +0000 (0:01:03.611) 0:04:16.882 *******
2026-02-09 04:00:51.555495 | orchestrator | ===============================================================================
2026-02-09 04:00:51.555499 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 63.61s
2026-02-09 04:00:51.555503 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 37.57s
2026-02-09 04:00:51.555506 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.03s
2026-02-09 04:00:51.555520 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.22s
2026-02-09 04:00:51.555524 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.24s
2026-02-09 04:00:51.555528 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.23s
2026-02-09 04:00:51.555532 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.90s
2026-02-09 04:00:51.555535 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.81s
2026-02-09 04:00:51.555539 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.69s
2026-02-09 04:00:51.555543 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.68s
2026-02-09 04:00:51.555547 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.49s
2026-02-09 04:00:51.555550 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.22s
2026-02-09 04:00:51.555554 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.03s
2026-02-09 04:00:51.555558 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 2.99s
2026-02-09 04:00:51.555562 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.98s
2026-02-09 04:00:51.555565 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.88s
2026-02-09 04:00:51.555569 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 2.71s
2026-02-09 04:00:51.555577 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.71s
2026-02-09 04:00:51.555581 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.70s
2026-02-09 04:00:51.555585 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.70s
2026-02-09 04:00:54.145897 | orchestrator | 2026-02-09 04:00:54 | INFO  | Task 701586b6-94e5-4eea-94a3-44a3ae5051f1 (nova) was prepared for execution.
2026-02-09 04:00:54.145981 | orchestrator | 2026-02-09 04:00:54 | INFO  | It takes a moment until task 701586b6-94e5-4eea-94a3-44a3ae5051f1 (nova) has been started and output is visible here.
2026-02-09 04:02:43.098333 | orchestrator |
2026-02-09 04:02:43.098437 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 04:02:43.098451 | orchestrator |
2026-02-09 04:02:43.098470 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-09 04:02:43.098494 | orchestrator | Monday 09 February 2026 04:00:58 +0000 (0:00:00.286) 0:00:00.286 *******
2026-02-09 04:02:43.098499 | orchestrator | changed: [testbed-manager]
2026-02-09 04:02:43.098505 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.098509 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:02:43.098513 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:02:43.098517 | orchestrator | changed: [testbed-node-3]
2026-02-09 04:02:43.098522 | orchestrator | changed: [testbed-node-4]
2026-02-09 04:02:43.098525 | orchestrator | changed: [testbed-node-5]
2026-02-09 04:02:43.098529 | orchestrator |
2026-02-09 04:02:43.098533 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 04:02:43.098537 | orchestrator | Monday 09 February 2026 04:00:59 +0000 (0:00:00.887) 0:00:01.174 *******
2026-02-09 04:02:43.098541 | orchestrator | changed: [testbed-manager]
2026-02-09 04:02:43.098545 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.098549 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:02:43.098553 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:02:43.098557 | orchestrator | changed: [testbed-node-3]
2026-02-09 04:02:43.098560 | orchestrator | changed: [testbed-node-4]
2026-02-09 04:02:43.098564 | orchestrator | changed: [testbed-node-5]
2026-02-09 04:02:43.098568 | orchestrator |
2026-02-09 04:02:43.098573 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 04:02:43.098579 | orchestrator | Monday 09 February 2026 04:01:00 +0000 (0:00:00.872) 0:00:02.047 *******
2026-02-09 04:02:43.098585 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-09 04:02:43.098591 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-09 04:02:43.098597 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-09 04:02:43.098603 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-09 04:02:43.098608 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-09 04:02:43.098614 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-09 04:02:43.098620 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-09 04:02:43.098625 | orchestrator |
2026-02-09 04:02:43.098631 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-09 04:02:43.098637 | orchestrator |
2026-02-09 04:02:43.098643 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-09 04:02:43.098648 | orchestrator | Monday 09 February 2026 04:01:01 +0000 (0:00:00.750) 0:00:02.797 *******
2026-02-09 04:02:43.098654 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:02:43.098660 | orchestrator |
2026-02-09 04:02:43.098665 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-09 04:02:43.098671 | orchestrator | Monday 09 February 2026 04:01:01 +0000 (0:00:00.770) 0:00:03.568 *******
2026-02-09 04:02:43.098678 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-09 04:02:43.098685 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-09 04:02:43.098709 | orchestrator |
2026-02-09 04:02:43.098715 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-09 04:02:43.098721 | orchestrator | Monday 09 February 2026 04:01:05 +0000 (0:00:03.776) 0:00:07.345 *******
2026-02-09 04:02:43.098728 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-09 04:02:43.098733 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-09 04:02:43.098739 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.098744 | orchestrator |
2026-02-09 04:02:43.098750 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-09 04:02:43.098756 | orchestrator | Monday 09 February 2026 04:01:09 +0000 (0:00:03.835) 0:00:11.180 *******
2026-02-09 04:02:43.098762 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.098768 | orchestrator |
2026-02-09 04:02:43.098775 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-09 04:02:43.098782 | orchestrator | Monday 09 February 2026 04:01:10 +0000 (0:00:00.621) 0:00:11.802 *******
2026-02-09 04:02:43.098788 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.098791 | orchestrator |
2026-02-09 04:02:43.098795 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-09 04:02:43.098799 | orchestrator | Monday 09 February 2026 04:01:11 +0000 (0:00:01.295) 0:00:13.097 *******
2026-02-09 04:02:43.098803 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.098807 | orchestrator |
2026-02-09 04:02:43.098810 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-09 04:02:43.098814 | orchestrator | Monday 09 February 2026 04:01:14 +0000 (0:00:02.677) 0:00:15.775 *******
2026-02-09 04:02:43.098818 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:02:43.098822 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.098825 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.098829 | orchestrator |
2026-02-09 04:02:43.098833 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-09 04:02:43.098837 | orchestrator | Monday 09 February 2026 04:01:14 +0000 (0:00:00.319) 0:00:16.094 *******
2026-02-09 04:02:43.098840 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:02:43.098844 | orchestrator |
2026-02-09 04:02:43.098848 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-09 04:02:43.098852 | orchestrator | Monday 09 February 2026 04:01:43 +0000 (0:00:29.435) 0:00:45.530 *******
2026-02-09 04:02:43.098855 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.098859 | orchestrator |
2026-02-09 04:02:43.098863 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-09 04:02:43.098867 | orchestrator | Monday 09 February 2026 04:01:57 +0000 (0:00:13.185) 0:00:58.716 *******
2026-02-09 04:02:43.098871 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:02:43.098874 | orchestrator |
2026-02-09 04:02:43.098878 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-09 04:02:43.098882 | orchestrator | Monday 09 February 2026 04:02:08 +0000 (0:00:11.281) 0:01:09.997 *******
2026-02-09 04:02:43.098900 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:02:43.098905 | orchestrator |
2026-02-09 04:02:43.098910 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-09 04:02:43.098914 | orchestrator | Monday 09 February 2026 04:02:09 +0000 (0:00:00.663) 0:01:10.661 *******
2026-02-09 04:02:43.098923 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:02:43.098928 | orchestrator |
2026-02-09 04:02:43.098933 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-09 04:02:43.098937 | orchestrator | Monday 09 February 2026 04:02:09 +0000 (0:00:00.487) 0:01:11.148 *******
2026-02-09 04:02:43.098943 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:02:43.098948 | orchestrator |
2026-02-09 04:02:43.098952 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-09 04:02:43.098958 | orchestrator | Monday 09 February 2026 04:02:10 +0000 (0:00:00.702) 0:01:11.851 *******
2026-02-09 04:02:43.098962 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:02:43.098972 | orchestrator |
2026-02-09 04:02:43.098976 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-09 04:02:43.098980 | orchestrator | Monday 09 February 2026 04:02:26 +0000 (0:00:16.241) 0:01:28.092 *******
2026-02-09 04:02:43.098985 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:02:43.098989 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.098994 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.098998 | orchestrator |
2026-02-09 04:02:43.099003 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-09 04:02:43.099008 | orchestrator |
2026-02-09 04:02:43.099012 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-09 04:02:43.099016 | orchestrator | Monday 09 February 2026 04:02:26 +0000 (0:00:00.333) 0:01:28.425 *******
2026-02-09 04:02:43.099021 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:02:43.099026 | orchestrator |
2026-02-09 04:02:43.099031 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-09 04:02:43.099035 | orchestrator | Monday 09 February 2026 04:02:27 +0000 (0:00:00.794) 0:01:29.219 *******
2026-02-09 04:02:43.099040 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.099060 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.099066 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.099070 | orchestrator |
2026-02-09 04:02:43.099076 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-09 04:02:43.099080 | orchestrator | Monday 09 February 2026 04:02:29 +0000 (0:00:01.708) 0:01:30.928 *******
2026-02-09 04:02:43.099085 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.099089 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.099094 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.099099 | orchestrator |
2026-02-09 04:02:43.099103 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-09 04:02:43.099108 | orchestrator | Monday 09 February 2026 04:02:31 +0000 (0:00:01.803) 0:01:32.732 *******
2026-02-09 04:02:43.099112 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:02:43.099117 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.099122 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.099126 | orchestrator |
2026-02-09 04:02:43.099130 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-09 04:02:43.099135 | orchestrator | Monday 09 February 2026 04:02:31 +0000 (0:00:00.549) 0:01:33.281 *******
2026-02-09 04:02:43.099140 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-09 04:02:43.099145 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.099149 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-09 04:02:43.099153 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.099158 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-09 04:02:43.099163 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-09 04:02:43.099168 | orchestrator |
2026-02-09 04:02:43.099172 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-09 04:02:43.099177 | orchestrator | Monday 09 February 2026 04:02:37 +0000 (0:00:06.019) 0:01:39.300 *******
2026-02-09 04:02:43.099181 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:02:43.099186 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.099190 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.099195 | orchestrator |
2026-02-09 04:02:43.099199 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-09 04:02:43.099204 | orchestrator | Monday 09 February 2026 04:02:38 +0000 (0:00:00.363) 0:01:39.663 *******
2026-02-09 04:02:43.099208 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-09 04:02:43.099213 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:02:43.099218 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-09 04:02:43.099222 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.099227 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-09 04:02:43.099236 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.099241 | orchestrator |
2026-02-09 04:02:43.099246 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-09 04:02:43.099250 | orchestrator | Monday 09 February 2026 04:02:39 +0000 (0:00:01.126) 0:01:40.790 *******
2026-02-09 04:02:43.099255 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.099260 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.099264 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.099269 | orchestrator |
2026-02-09 04:02:43.099273 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-09 04:02:43.099278 | orchestrator | Monday 09 February 2026 04:02:39 +0000 (0:00:00.485) 0:01:41.276 *******
2026-02-09 04:02:43.099283 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.099287 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.099292 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:02:43.099296 | orchestrator |
2026-02-09 04:02:43.099301 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-09 04:02:43.099306 | orchestrator | Monday 09 February 2026 04:02:40 +0000 (0:00:00.939) 0:01:42.215 *******
2026-02-09 04:02:43.099310 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:02:43.099314 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:02:43.099321 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:03:55.779386 | orchestrator |
2026-02-09 04:03:55.779485 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-09 04:03:55.779498 | orchestrator | Monday 09 February 2026 04:02:43 +0000 (0:00:02.495) 0:01:44.711 *******
2026-02-09 04:03:55.779507 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:03:55.779517 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:03:55.779525 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:03:55.779534 | orchestrator |
2026-02-09 04:03:55.779542 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-09 04:03:55.779551 | orchestrator | Monday 09 February 2026 04:03:02 +0000 (0:00:19.809) 0:02:04.520 *******
2026-02-09 04:03:55.779559 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:03:55.779567 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:03:55.779575 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:03:55.779583 | orchestrator |
2026-02-09 04:03:55.779591 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-09 04:03:55.779600 | orchestrator | Monday 09 February 2026 04:03:14 +0000 (0:00:11.114) 0:02:15.634 *******
2026-02-09 04:03:55.779608 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:03:55.779616 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:03:55.779624 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:03:55.779632 | orchestrator |
2026-02-09 04:03:55.779640 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-02-09 04:03:55.779648 | orchestrator | Monday 09 February 2026 04:03:15 +0000 (0:00:01.102) 0:02:16.737 *******
2026-02-09 04:03:55.779656 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:03:55.779664 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:03:55.779672 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:03:55.779680 | orchestrator |
2026-02-09 04:03:55.779688 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-02-09 04:03:55.779696 | orchestrator | Monday 09 February 2026 04:03:26 +0000 (0:00:11.656) 0:02:28.393 *******
2026-02-09 04:03:55.779704 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:03:55.779711 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:03:55.779719 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:03:55.779727 | orchestrator |
2026-02-09 04:03:55.779735 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-09 04:03:55.779743 | orchestrator | Monday 09 February 2026 04:03:27 +0000 (0:00:01.116) 0:02:29.509 *******
2026-02-09 04:03:55.779751 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:03:55.779759 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:03:55.779785 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:03:55.779793 | orchestrator |
2026-02-09 04:03:55.779801 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-02-09 04:03:55.779809 | orchestrator |
2026-02-09 04:03:55.779817 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-09 04:03:55.779824 | orchestrator | Monday 09 February 2026 04:03:28 +0000 (0:00:00.322) 0:02:29.832 *******
2026-02-09 04:03:55.779832 |
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:03:55.779841 | orchestrator |
2026-02-09 04:03:55.779849 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-02-09 04:03:55.779857 | orchestrator | Monday 09 February 2026 04:03:29 +0000 (0:00:00.802) 0:02:30.634 *******
2026-02-09 04:03:55.779887 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-02-09 04:03:55.779895 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-02-09 04:03:55.779903 | orchestrator |
2026-02-09 04:03:55.779911 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-02-09 04:03:55.779919 | orchestrator | Monday 09 February 2026 04:03:32 +0000 (0:00:03.073) 0:02:33.708 *******
2026-02-09 04:03:55.779927 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-02-09 04:03:55.779974 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-02-09 04:03:55.779986 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-02-09 04:03:55.779996 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-02-09 04:03:55.780005 | orchestrator |
2026-02-09 04:03:55.780015 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-02-09 04:03:55.780024 | orchestrator | Monday 09 February 2026 04:03:38 +0000 (0:00:06.024) 0:02:39.733 *******
2026-02-09 04:03:55.780034 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-09 04:03:55.780044 | orchestrator |
2026-02-09 04:03:55.780053 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-02-09 04:03:55.780063 | orchestrator | Monday 09 February 2026 04:03:41 +0000 (0:00:02.961) 0:02:42.694 *******
2026-02-09 04:03:55.780073 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-09 04:03:55.780082 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-02-09 04:03:55.780091 | orchestrator |
2026-02-09 04:03:55.780101 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-02-09 04:03:55.780110 | orchestrator | Monday 09 February 2026 04:03:44 +0000 (0:00:03.545) 0:02:46.239 *******
2026-02-09 04:03:55.780119 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-09 04:03:55.780129 | orchestrator |
2026-02-09 04:03:55.780138 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-02-09 04:03:55.780148 | orchestrator | Monday 09 February 2026 04:03:47 +0000 (0:00:02.939) 0:02:49.179 *******
2026-02-09 04:03:55.780157 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-02-09 04:03:55.780166 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-02-09 04:03:55.780176 | orchestrator |
2026-02-09 04:03:55.780185 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-09 04:03:55.780209 | orchestrator | Monday 09 February 2026 04:03:54 +0000 (0:00:06.892) 0:02:56.072 *******
2026-02-09 04:03:55.780228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:03:55.780253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:03:55.780265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:03:55.780287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:00.509053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:00.509141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:00.509148 | orchestrator |
2026-02-09 04:04:00.509153 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-02-09 04:04:00.509159 | orchestrator | Monday 09 February 2026 04:03:55 +0000 (0:00:01.320) 0:02:57.392 *******
2026-02-09 04:04:00.509163 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:04:00.509167 | orchestrator |
2026-02-09 04:04:00.509172 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-02-09 04:04:00.509177 | orchestrator | Monday 09 February 2026 04:03:55 +0000 (0:00:00.139) 0:02:57.532 *******
2026-02-09 04:04:00.509183 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:04:00.509190 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:04:00.509196 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:04:00.509203 | orchestrator |
2026-02-09 04:04:00.509209 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-02-09 04:04:00.509215 | orchestrator | Monday 09 February 2026 04:03:56 +0000 (0:00:00.338) 0:02:57.870 *******
2026-02-09 04:04:00.509221 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 04:04:00.509227 | orchestrator |
2026-02-09 04:04:00.509234 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-02-09 04:04:00.509241 | orchestrator | Monday 09 February 2026 04:03:56 +0000 (0:00:00.740) 0:02:58.611 *******
2026-02-09 04:04:00.509248 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:04:00.509254 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:04:00.509261 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:04:00.509265 | orchestrator |
2026-02-09 04:04:00.509269 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-09 04:04:00.509273 | orchestrator | Monday 09 February 2026 04:03:57 +0000 (0:00:00.527) 0:02:59.138 *******
2026-02-09 04:04:00.509277 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:04:00.509282 | orchestrator |
2026-02-09 04:04:00.509286 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-02-09 04:04:00.509290 | orchestrator | Monday 09 February 2026 04:03:58 +0000 (0:00:00.601) 0:02:59.740 *******
2026-02-09 04:04:00.509297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:00.509328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:00.509334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:00.509338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:00.509343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:00.509357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:00.509364 | orchestrator |
2026-02-09 04:04:00.509375 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-02-09 04:04:02.186610 | orchestrator | Monday 09 February 2026 04:04:00 +0000 (0:00:02.382) 0:03:02.122 *******
2026-02-09 04:04:02.186710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:02.186728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:02.186739 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:04:02.186749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:02.186781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:02.186791 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:04:02.186828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:02.186841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:02.186908 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:04:02.186922 | orchestrator |
2026-02-09 04:04:02.186936 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-02-09 04:04:02.186950 | orchestrator | Monday 09 February 2026 04:04:01 +0000 (0:00:00.861) 0:03:02.984
*******
2026-02-09 04:04:02.186963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:02.186991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:02.187005 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:04:02.187040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:04.504767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:04.504920 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:04:04.504942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:04.504991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:04:04.505012 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:04:04.505032 | orchestrator |
2026-02-09 04:04:04.505051 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-02-09 04:04:04.505091 | orchestrator | Monday 09 February 2026 04:04:02 +0000 (0:00:00.819) 0:03:03.804 *******
2026-02-09 04:04:04.505130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:04.505180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-09 04:04:04.505203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 04:04:04.505236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:04.505255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:04.505276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-09 04:04:11.150520 | orchestrator | 2026-02-09 04:04:11.150629 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-09 04:04:11.150647 | orchestrator | Monday 09 February 2026 04:04:04 +0000 (0:00:02.314) 0:03:06.118 ******* 2026-02-09 04:04:11.150664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 04:04:11.150703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 04:04:11.150732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 04:04:11.150764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:11.150779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:11.150791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:11.150811 | orchestrator | 2026-02-09 04:04:11.150823 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-09 04:04:11.150929 | orchestrator | Monday 09 February 2026 04:04:10 +0000 (0:00:05.927) 0:03:12.046 ******* 2026-02-09 04:04:11.150942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-09 04:04:11.151021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:04:11.151035 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:04:11.151063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-09 04:04:15.380035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:04:15.380155 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:04:15.380170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-09 04:04:15.380193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:04:15.380201 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:04:15.380209 | orchestrator | 2026-02-09 04:04:15.380218 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-09 04:04:15.380226 | orchestrator | Monday 09 February 2026 04:04:11 +0000 (0:00:00.726) 0:03:12.773 ******* 2026-02-09 04:04:15.380233 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:04:15.380240 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:04:15.380248 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:04:15.380255 | orchestrator | 2026-02-09 04:04:15.380262 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-09 04:04:15.380269 | orchestrator | Monday 09 February 2026 04:04:12 +0000 (0:00:01.483) 0:03:14.257 ******* 2026-02-09 04:04:15.380277 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:04:15.380284 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:04:15.380291 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:04:15.380298 | orchestrator | 2026-02-09 04:04:15.380305 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-09 04:04:15.380312 | orchestrator | Monday 09 February 2026 04:04:12 +0000 (0:00:00.344) 0:03:14.601 ******* 2026-02-09 04:04:15.380336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 04:04:15.380351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 04:04:15.380364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-09 04:04:15.380373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:15.380386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:15.380400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:57.672348 | orchestrator | 2026-02-09 04:04:57.672443 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-09 04:04:57.672451 | orchestrator | Monday 09 February 2026 04:04:14 +0000 (0:00:01.930) 0:03:16.532 ******* 2026-02-09 04:04:57.672455 | orchestrator | 2026-02-09 04:04:57.672459 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-09 04:04:57.672463 | orchestrator | Monday 09 February 2026 04:04:15 +0000 (0:00:00.156) 0:03:16.688 ******* 2026-02-09 
04:04:57.672467 | orchestrator | 2026-02-09 04:04:57.672472 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-09 04:04:57.672475 | orchestrator | Monday 09 February 2026 04:04:15 +0000 (0:00:00.152) 0:03:16.840 ******* 2026-02-09 04:04:57.672479 | orchestrator | 2026-02-09 04:04:57.672483 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-09 04:04:57.672487 | orchestrator | Monday 09 February 2026 04:04:15 +0000 (0:00:00.152) 0:03:16.993 ******* 2026-02-09 04:04:57.672491 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:04:57.672495 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:04:57.672499 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:04:57.672503 | orchestrator | 2026-02-09 04:04:57.672507 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-09 04:04:57.672511 | orchestrator | Monday 09 February 2026 04:04:37 +0000 (0:00:22.361) 0:03:39.354 ******* 2026-02-09 04:04:57.672514 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:04:57.672518 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:04:57.672522 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:04:57.672526 | orchestrator | 2026-02-09 04:04:57.672529 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-09 04:04:57.672533 | orchestrator | 2026-02-09 04:04:57.672537 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-09 04:04:57.672541 | orchestrator | Monday 09 February 2026 04:04:45 +0000 (0:00:08.185) 0:03:47.540 ******* 2026-02-09 04:04:57.672546 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:04:57.672551 | orchestrator | 2026-02-09 04:04:57.672555 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-09 04:04:57.672558 | orchestrator | Monday 09 February 2026 04:04:47 +0000 (0:00:01.284) 0:03:48.824 ******* 2026-02-09 04:04:57.672562 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:04:57.672566 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:04:57.672580 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:04:57.672584 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:04:57.672589 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:04:57.672606 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:04:57.672610 | orchestrator | 2026-02-09 04:04:57.672613 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-09 04:04:57.672617 | orchestrator | Monday 09 February 2026 04:04:48 +0000 (0:00:00.845) 0:03:49.670 ******* 2026-02-09 04:04:57.672621 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:04:57.672625 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:04:57.672628 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:04:57.672632 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 04:04:57.672636 | orchestrator | 2026-02-09 04:04:57.672640 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-09 04:04:57.672644 | orchestrator | Monday 09 February 2026 04:04:48 +0000 (0:00:00.928) 0:03:50.598 ******* 2026-02-09 04:04:57.672648 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-09 04:04:57.672653 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-09 04:04:57.672656 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-09 04:04:57.672660 | orchestrator | 2026-02-09 04:04:57.672664 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-09 
04:04:57.672668 | orchestrator | Monday 09 February 2026 04:04:49 +0000 (0:00:00.927) 0:03:51.526 ******* 2026-02-09 04:04:57.672671 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-09 04:04:57.672675 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-09 04:04:57.672679 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-09 04:04:57.672683 | orchestrator | 2026-02-09 04:04:57.672687 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-09 04:04:57.672690 | orchestrator | Monday 09 February 2026 04:04:51 +0000 (0:00:01.231) 0:03:52.757 ******* 2026-02-09 04:04:57.672694 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-09 04:04:57.672698 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:04:57.672701 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-09 04:04:57.672705 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:04:57.672709 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-09 04:04:57.672712 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:04:57.672716 | orchestrator | 2026-02-09 04:04:57.672769 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-09 04:04:57.672775 | orchestrator | Monday 09 February 2026 04:04:51 +0000 (0:00:00.600) 0:03:53.358 ******* 2026-02-09 04:04:57.672779 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-09 04:04:57.672782 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-09 04:04:57.672786 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-09 04:04:57.672790 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-09 04:04:57.672793 | orchestrator | skipping: [testbed-node-0] 
2026-02-09 04:04:57.672797 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-09 04:04:57.672801 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-09 04:04:57.672815 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-09 04:04:57.672819 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-09 04:04:57.672823 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:04:57.672827 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-09 04:04:57.672830 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-09 04:04:57.672834 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-09 04:04:57.672838 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:04:57.672846 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-09 04:04:57.672849 | orchestrator | 2026-02-09 04:04:57.672853 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-09 04:04:57.672857 | orchestrator | Monday 09 February 2026 04:04:52 +0000 (0:00:01.197) 0:03:54.555 ******* 2026-02-09 04:04:57.672860 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:04:57.672864 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:04:57.672868 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:04:57.672872 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:04:57.672875 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:04:57.672879 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:04:57.672883 | orchestrator | 2026-02-09 04:04:57.672887 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-09 04:04:57.672890 | orchestrator | 
Monday 09 February 2026 04:04:54 +0000 (0:00:01.096) 0:03:55.652 ******* 2026-02-09 04:04:57.672894 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:04:57.672898 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:04:57.672901 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:04:57.672905 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:04:57.672910 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:04:57.672914 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:04:57.672918 | orchestrator | 2026-02-09 04:04:57.672922 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-09 04:04:57.672926 | orchestrator | Monday 09 February 2026 04:04:55 +0000 (0:00:01.698) 0:03:57.350 ******* 2026-02-09 04:04:57.672937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:04:57.672946 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:04:57.672953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:04:59.622894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623096 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623244 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623263 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:04:59.623325 | orchestrator | 2026-02-09 04:04:59.623338 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-09 
04:04:59.623349 | orchestrator | Monday 09 February 2026 04:04:58 +0000 (0:00:02.485) 0:03:59.835 ******* 2026-02-09 04:04:59.623360 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:04:59.623371 | orchestrator | 2026-02-09 04:04:59.623381 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-09 04:04:59.623398 | orchestrator | Monday 09 February 2026 04:04:59 +0000 (0:00:01.401) 0:04:01.236 ******* 2026-02-09 04:05:02.900995 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 
04:05:02.901384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:02.901512 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:05.081383 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:05.081517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:05.081542 | orchestrator | 2026-02-09 04:05:05.081564 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-09 04:05:05.081586 | orchestrator | Monday 09 February 2026 04:05:03 +0000 (0:00:03.701) 0:04:04.938 ******* 2026-02-09 04:05:05.081604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:05:05.081640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:05:05.081653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-09 04:05:05.081665 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:05:05.081695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:05:05.081749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:05:05.081763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-09 04:05:05.081784 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:05:05.081795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:05:05.081807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:05:05.081829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-09 04:05:06.579800 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:05:06.579897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-09 04:05:06.579911 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:05:06.579934 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:05:06.579941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-09 04:05:06.579948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:05:06.579955 | orchestrator | skipping: [testbed-node-1] 2026-02-09 
04:05:06.579962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-09 04:05:06.579968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:05:06.579975 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:05:06.579981 | orchestrator | 2026-02-09 04:05:06.579988 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-09 04:05:06.579996 | orchestrator | Monday 09 February 2026 04:05:05 +0000 (0:00:01.855) 0:04:06.793 ******* 2026-02-09 04:05:06.580019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:05:06.580033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:05:06.580040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-02-09 04:05:06.580048 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:05:06.580055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:05:06.580062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:05:06.580074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-09 04:05:14.416939 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:05:14.417037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:05:14.417063 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:05:14.417070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-09 04:05:14.417076 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:05:14.417083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-09 04:05:14.417089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:05:14.417094 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:05:14.417114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-09 04:05:14.417125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:05:14.417131 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:05:14.417136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-09 04:05:14.417142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-09 04:05:14.417147 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:05:14.417152 | orchestrator |
2026-02-09 04:05:14.417159 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-09 04:05:14.417165 | orchestrator | Monday 09 February 2026 04:05:07 +0000 (0:00:02.371) 0:04:09.164 *******
2026-02-09 04:05:14.417170 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:05:14.417176 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:05:14.417181 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:05:14.417186 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 04:05:14.417192 | orchestrator |
2026-02-09 04:05:14.417197 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-02-09 04:05:14.417202 | orchestrator | Monday 09 February 2026 04:05:08 +0000 (0:00:01.136) 0:04:10.300 *******
2026-02-09 04:05:14.417207 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-09 04:05:14.417212 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-09 04:05:14.417218 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-09 04:05:14.417223 | orchestrator |
2026-02-09 04:05:14.417228 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-09 04:05:14.417233 | orchestrator | Monday 09 February 2026 04:05:09 +0000 (0:00:01.104) 0:04:11.404 *******
2026-02-09 04:05:14.417239 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-09 04:05:14.417244 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-09 04:05:14.417249 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-09 04:05:14.417254 | orchestrator |
2026-02-09 04:05:14.417259 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-09 04:05:14.417264 | orchestrator | Monday 09 February 2026 04:05:10 +0000 (0:00:00.957) 0:04:12.362 *******
2026-02-09 04:05:14.417269 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:05:14.417275 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:05:14.417284 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:05:14.417289 | orchestrator |
2026-02-09 04:05:14.417294 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-09 04:05:14.417299 | orchestrator | Monday 09 February 2026 04:05:11 +0000 (0:00:00.549) 0:04:12.912 *******
2026-02-09 04:05:14.417304 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:05:14.417309 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:05:14.417314 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:05:14.417319 | orchestrator |
2026-02-09 04:05:14.417324 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-09 04:05:14.417329 | orchestrator | Monday 09 February 2026 04:05:11 +0000 (0:00:00.537) 0:04:13.449 *******
2026-02-09 04:05:14.417334 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-09 04:05:14.417340 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-09 04:05:14.417345 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-09 04:05:14.417350 | orchestrator |
2026-02-09 04:05:14.417356 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-09 04:05:14.417361 | orchestrator | Monday 09 February 2026 04:05:13 +0000 (0:00:01.412) 0:04:14.861 *******
2026-02-09 04:05:14.417370 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-09 04:05:33.105395 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-09 04:05:33.105506 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-09 04:05:33.105517 | orchestrator |
2026-02-09 04:05:33.105526 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-09 04:05:33.105535 | orchestrator | Monday 09 February 2026 04:05:14 +0000 (0:00:01.173) 0:04:16.035 *******
2026-02-09 04:05:33.105543 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-09 04:05:33.105550 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-09 04:05:33.105558 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-09 04:05:33.105565 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-09 04:05:33.105572 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-09 04:05:33.105580 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-09 04:05:33.105587 | orchestrator |
2026-02-09 04:05:33.105594 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-09 04:05:33.105602 | orchestrator | Monday 09 February 2026 04:05:18 +0000 (0:00:03.904) 0:04:19.939 *******
2026-02-09 04:05:33.105609 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:05:33.105618 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:05:33.105625 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:05:33.105632 | orchestrator |
2026-02-09 04:05:33.105639 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-09 04:05:33.105699 | orchestrator | Monday 09 February 2026 04:05:18 +0000 (0:00:00.360) 0:04:20.300 *******
2026-02-09 04:05:33.105708 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:05:33.105716 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:05:33.105723 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:05:33.105730 | orchestrator |
2026-02-09 04:05:33.105738 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-09 04:05:33.105745 | orchestrator | Monday 09 February 2026 04:05:19 +0000 (0:00:00.622) 0:04:20.922 *******
2026-02-09 04:05:33.105753 | orchestrator | changed: [testbed-node-3]
2026-02-09 04:05:33.105793 | orchestrator | changed: [testbed-node-4]
2026-02-09 04:05:33.105802 | orchestrator | changed: [testbed-node-5]
2026-02-09 04:05:33.105809 | orchestrator |
2026-02-09 04:05:33.105817 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-09 04:05:33.105824 | orchestrator | Monday 09 February 2026 04:05:20 +0000 (0:00:01.314) 0:04:22.236 *******
2026-02-09 04:05:33.105832 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-09 04:05:33.105861 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-09 04:05:33.105869 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-09 04:05:33.105877 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-09 04:05:33.105884 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-09 04:05:33.105892 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-09 04:05:33.105899 | orchestrator |
2026-02-09 04:05:33.105907 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-09 04:05:33.105914 | orchestrator | Monday 09 February 2026 04:05:24 +0000 (0:00:03.456) 0:04:25.692 *******
2026-02-09 04:05:33.105921 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-09 04:05:33.105929 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-09 04:05:33.105936 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-09 04:05:33.105943 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-09 04:05:33.105952 | orchestrator | changed: [testbed-node-3]
2026-02-09 04:05:33.105961 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-09 04:05:33.105970 | orchestrator | changed: [testbed-node-4]
2026-02-09 04:05:33.105978 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-09 04:05:33.105986 | orchestrator | changed: [testbed-node-5]
2026-02-09 04:05:33.105994 | orchestrator |
2026-02-09 04:05:33.106003 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-09 04:05:33.106058 | orchestrator | Monday 09 February 2026 04:05:27 +0000 (0:00:00.143) 0:04:29.113 *******
2026-02-09 04:05:33.106067 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:05:33.106076 | orchestrator |
2026-02-09 04:05:33.106084 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-09 04:05:33.106093 | orchestrator | Monday 09 February 2026 04:05:27 +0000 (0:00:00.850) 0:04:29.256 *******
2026-02-09 04:05:33.106103 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:05:33.106115 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:05:33.106128 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:05:33.106140 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:05:33.106153 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:05:33.106164 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:05:33.106177 | orchestrator |
2026-02-09 04:05:33.106188 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-09 04:05:33.106199 | orchestrator | Monday 09 February 2026 04:05:28 +0000 (0:00:00.715) 0:04:30.106 *******
2026-02-09 04:05:33.106212 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-09 04:05:33.106225 | orchestrator |
2026-02-09 04:05:33.106237 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-09 04:05:33.106249 | orchestrator | Monday 09 February 2026 04:05:29 +0000 (0:00:00.844) 0:04:30.822 *******
2026-02-09 04:05:33.106261 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:05:33.106293 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:05:33.106306 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:05:33.106326 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:05:33.106339 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:05:33.106350 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:05:33.106361 | orchestrator |
2026-02-09 04:05:33.106374 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-09 04:05:33.106386 | orchestrator | Monday 09 February 2026 04:05:30 +0000 (0:00:00.844) 0:04:31.667 ******* 2026-02-09 04:05:33.106402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:05:33.106432 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:05:33.106443 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:05:33.106456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:05:33.106486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433360 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433443 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433472 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433481 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:38.433491 | orchestrator | 2026-02-09 04:05:38.433502 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-09 04:05:38.433512 | orchestrator | Monday 09 February 2026 04:05:33 +0000 (0:00:03.256) 0:04:34.923 ******* 2026-02-09 04:05:38.433522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:05:38.433537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:05:38.433559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:05:40.672573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:05:40.672727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:05:40.672745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:05:40.672757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:40.672812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:40.672845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:05:40.672859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:40.672870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:05:40.672882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:05:40.672894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:40.672919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:40.672932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:05:40.672944 | orchestrator | 2026-02-09 04:05:40.672956 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-09 04:05:40.672976 | orchestrator | Monday 09 February 2026 04:05:40 +0000 (0:00:07.360) 0:04:42.284 ******* 2026-02-09 04:06:02.566824 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:06:02.566948 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:06:02.566966 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:06:02.566978 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:02.566991 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:06:02.567003 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:02.567015 | orchestrator | 2026-02-09 04:06:02.567028 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-09 04:06:02.567041 | orchestrator | Monday 09 February 2026 04:05:42 +0000 (0:00:01.546) 0:04:43.830 ******* 2026-02-09 04:06:02.567053 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-09 04:06:02.567066 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-09 04:06:02.567077 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-09 04:06:02.567089 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-09 04:06:02.567097 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-09 04:06:02.567104 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-09 04:06:02.567112 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-09 04:06:02.567118 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:02.567125 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-09 04:06:02.567132 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:06:02.567139 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-09 04:06:02.567145 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:02.567152 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-09 04:06:02.567159 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-09 04:06:02.567166 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-09 04:06:02.567173 | orchestrator | 2026-02-09 04:06:02.567199 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-09 04:06:02.567207 | orchestrator | Monday 09 February 2026 04:05:45 +0000 (0:00:03.794) 0:04:47.625 ******* 2026-02-09 04:06:02.567214 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:06:02.567220 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:06:02.567227 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:06:02.567234 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:02.567240 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:06:02.567247 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:02.567253 | orchestrator | 2026-02-09 04:06:02.567260 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-09 04:06:02.567267 | orchestrator | Monday 09 February 2026 04:05:46 +0000 (0:00:00.653) 0:04:48.278 ******* 2026-02-09 04:06:02.567273 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-09 04:06:02.567281 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-09 04:06:02.567287 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-09 04:06:02.567294 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-09 04:06:02.567301 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-09 04:06:02.567308 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-09 04:06:02.567314 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-09 04:06:02.567334 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-09 04:06:02.567340 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-09 04:06:02.567347 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-09 04:06:02.567356 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:02.567364 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-09 04:06:02.567373 | orchestrator | 
skipping: [testbed-node-1] 2026-02-09 04:06:02.567381 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-09 04:06:02.567389 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:02.567397 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-09 04:06:02.567405 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-09 04:06:02.567429 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-09 04:06:02.567438 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-09 04:06:02.567446 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-09 04:06:02.567453 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-09 04:06:02.567459 | orchestrator | 2026-02-09 04:06:02.567466 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-09 04:06:02.567475 | orchestrator | Monday 09 February 2026 04:05:51 +0000 (0:00:05.177) 0:04:53.455 ******* 2026-02-09 04:06:02.567487 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-09 04:06:02.567506 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-09 04:06:02.567517 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-09 04:06:02.567528 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-09 04:06:02.567538 
| orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-09 04:06:02.567549 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-09 04:06:02.567559 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-09 04:06:02.567570 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-09 04:06:02.567581 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-09 04:06:02.567617 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-09 04:06:02.567629 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-09 04:06:02.567641 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-09 04:06:02.567648 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-09 04:06:02.567655 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:02.567661 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-09 04:06:02.567668 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:02.567675 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-09 04:06:02.567681 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:06:02.567689 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-09 04:06:02.567695 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-09 04:06:02.567702 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-09 04:06:02.567709 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-09 04:06:02.567715 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-09 04:06:02.567722 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-09 04:06:02.567728 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-09 04:06:02.567735 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-09 04:06:02.567741 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-09 04:06:02.567748 | orchestrator | 2026-02-09 04:06:02.567755 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-09 04:06:02.567761 | orchestrator | Monday 09 February 2026 04:05:58 +0000 (0:00:07.117) 0:05:00.573 ******* 2026-02-09 04:06:02.567774 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:06:02.567781 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:06:02.567787 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:06:02.567794 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:02.567800 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:06:02.567807 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:02.567813 | orchestrator | 2026-02-09 04:06:02.567820 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-09 04:06:02.567826 | orchestrator | Monday 09 February 2026 04:05:59 +0000 (0:00:00.946) 0:05:01.520 ******* 2026-02-09 04:06:02.567833 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:06:02.567839 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:06:02.567846 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:06:02.567858 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:02.567865 | orchestrator | 
skipping: [testbed-node-1] 2026-02-09 04:06:02.567871 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:02.567878 | orchestrator | 2026-02-09 04:06:02.567884 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-09 04:06:02.567891 | orchestrator | Monday 09 February 2026 04:06:00 +0000 (0:00:00.648) 0:05:02.168 ******* 2026-02-09 04:06:02.567897 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:06:02.567904 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:02.567910 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:06:02.567917 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:02.567924 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:06:02.567930 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:06:02.567937 | orchestrator | 2026-02-09 04:06:02.567951 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-09 04:06:03.827849 | orchestrator | Monday 09 February 2026 04:06:02 +0000 (0:00:02.003) 0:05:04.172 ******* 2026-02-09 04:06:03.827932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-02-09 04:06:03.827946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:06:03.827957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-09 04:06:03.827966 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:06:03.827989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:06:03.828014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:06:03.828036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-09 04:06:03.828045 | orchestrator | skipping: 
[testbed-node-3] 2026-02-09 04:06:03.828053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-09 04:06:03.828061 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-09 04:06:03.828068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-09 04:06:03.828081 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:06:03.828094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-09 04:06:03.828108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:06:07.411072 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:07.411187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-09 04:06:07.411210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:06:07.411222 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:07.411235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-09 04:06:07.411247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:06:07.411281 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:06:07.411289 | orchestrator | 2026-02-09 04:06:07.411297 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-09 04:06:07.411304 | orchestrator | Monday 09 February 2026 04:06:03 +0000 (0:00:01.423) 0:05:05.595 ******* 2026-02-09 04:06:07.411312 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-09 04:06:07.411319 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-09 04:06:07.411326 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:06:07.411333 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-09 04:06:07.411351 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-09 04:06:07.411358 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:06:07.411365 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-09 04:06:07.411372 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-09 04:06:07.411378 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:06:07.411385 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-09 04:06:07.411391 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-09 04:06:07.411398 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:07.411404 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-02-09 04:06:07.411411 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-09 04:06:07.411418 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:06:07.411424 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-09 04:06:07.411431 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-09 04:06:07.411438 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:07.411444 | orchestrator | 2026-02-09 04:06:07.411451 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-09 04:06:07.411458 | orchestrator | Monday 09 February 2026 04:06:04 +0000 (0:00:01.006) 0:05:06.601 ******* 2026-02-09 04:06:07.411479 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:06:07.411490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:06:07.411497 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-09 04:06:07.411513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:06:07.411524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:06:07.411544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:06:51.403844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-09 04:06:51.403955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:06:51.403971 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-09 04:06:51.404003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:06:51.404029 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:06:51.404041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:06:51.404069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:06:51.404081 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:06:51.404092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-09 04:06:51.404111 | orchestrator | 2026-02-09 04:06:51.404123 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-09 04:06:51.404134 | orchestrator | Monday 09 February 2026 04:06:07 +0000 (0:00:02.786) 0:05:09.387 ******* 2026-02-09 
04:06:51.404145 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:06:51.404155 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:06:51.404165 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:06:51.404175 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:06:51.404184 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:06:51.404194 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:06:51.404204 | orchestrator | 2026-02-09 04:06:51.404214 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-09 04:06:51.404224 | orchestrator | Monday 09 February 2026 04:06:08 +0000 (0:00:00.860) 0:05:10.248 ******* 2026-02-09 04:06:51.404234 | orchestrator | 2026-02-09 04:06:51.404243 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-09 04:06:51.404257 | orchestrator | Monday 09 February 2026 04:06:08 +0000 (0:00:00.143) 0:05:10.392 ******* 2026-02-09 04:06:51.404274 | orchestrator | 2026-02-09 04:06:51.404291 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-09 04:06:51.404308 | orchestrator | Monday 09 February 2026 04:06:08 +0000 (0:00:00.158) 0:05:10.550 ******* 2026-02-09 04:06:51.404327 | orchestrator | 2026-02-09 04:06:51.404351 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-09 04:06:51.404363 | orchestrator | Monday 09 February 2026 04:06:09 +0000 (0:00:00.141) 0:05:10.692 ******* 2026-02-09 04:06:51.404372 | orchestrator | 2026-02-09 04:06:51.404382 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-09 04:06:51.404392 | orchestrator | Monday 09 February 2026 04:06:09 +0000 (0:00:00.139) 0:05:10.832 ******* 2026-02-09 04:06:51.404403 | orchestrator | 2026-02-09 04:06:51.404413 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-02-09 04:06:51.404424 | orchestrator | Monday 09 February 2026 04:06:09 +0000 (0:00:00.313) 0:05:11.145 ******* 2026-02-09 04:06:51.404435 | orchestrator | 2026-02-09 04:06:51.404445 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-09 04:06:51.404456 | orchestrator | Monday 09 February 2026 04:06:09 +0000 (0:00:00.141) 0:05:11.286 ******* 2026-02-09 04:06:51.404467 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:06:51.404478 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:06:51.404489 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:06:51.404532 | orchestrator | 2026-02-09 04:06:51.404544 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-09 04:06:51.404555 | orchestrator | Monday 09 February 2026 04:06:16 +0000 (0:00:07.008) 0:05:18.295 ******* 2026-02-09 04:06:51.404566 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:06:51.404577 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:06:51.404588 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:06:51.404599 | orchestrator | 2026-02-09 04:06:51.404610 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-09 04:06:51.404621 | orchestrator | Monday 09 February 2026 04:06:31 +0000 (0:00:14.340) 0:05:32.635 ******* 2026-02-09 04:06:51.404640 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:06:51.404651 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:06:51.404662 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:06:51.404673 | orchestrator | 2026-02-09 04:06:51.404693 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-09 04:09:13.087594 | orchestrator | Monday 09 February 2026 04:06:51 +0000 (0:00:20.376) 0:05:53.012 ******* 2026-02-09 04:09:13.087689 | orchestrator | changed: 
[testbed-node-5] 2026-02-09 04:09:13.087700 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:09:13.087708 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:09:13.087716 | orchestrator | 2026-02-09 04:09:13.087724 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-09 04:09:13.087732 | orchestrator | Monday 09 February 2026 04:07:33 +0000 (0:00:42.109) 0:06:35.122 ******* 2026-02-09 04:09:13.087739 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:09:13.087747 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:09:13.087754 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:09:13.087761 | orchestrator | 2026-02-09 04:09:13.087769 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-09 04:09:13.087776 | orchestrator | Monday 09 February 2026 04:07:34 +0000 (0:00:00.799) 0:06:35.922 ******* 2026-02-09 04:09:13.087784 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:09:13.087791 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:09:13.087798 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:09:13.087805 | orchestrator | 2026-02-09 04:09:13.087812 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-09 04:09:13.087820 | orchestrator | Monday 09 February 2026 04:07:35 +0000 (0:00:00.800) 0:06:36.723 ******* 2026-02-09 04:09:13.087827 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:09:13.087834 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:09:13.087842 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:09:13.087849 | orchestrator | 2026-02-09 04:09:13.087857 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-09 04:09:13.087865 | orchestrator | Monday 09 February 2026 04:08:05 +0000 (0:00:30.543) 0:07:07.266 ******* 2026-02-09 04:09:13.087872 | orchestrator | skipping: 
[testbed-node-3] 2026-02-09 04:09:13.087879 | orchestrator | 2026-02-09 04:09:13.087887 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-09 04:09:13.087894 | orchestrator | Monday 09 February 2026 04:08:05 +0000 (0:00:00.135) 0:07:07.401 ******* 2026-02-09 04:09:13.087902 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:09:13.087909 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:09:13.087916 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:13.087923 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:13.087931 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:13.087938 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-02-09 04:09:13.087947 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-09 04:09:13.087955 | orchestrator | 2026-02-09 04:09:13.087962 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-09 04:09:13.087969 | orchestrator | Monday 09 February 2026 04:08:27 +0000 (0:00:21.659) 0:07:29.061 ******* 2026-02-09 04:09:13.087977 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:09:13.087984 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:13.087991 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:09:13.087999 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:09:13.088006 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:13.088013 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:13.088020 | orchestrator | 2026-02-09 04:09:13.088028 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-09 04:09:13.088035 | orchestrator | Monday 09 February 2026 04:08:36 +0000 (0:00:09.447) 0:07:38.508 ******* 2026-02-09 04:09:13.088042 | orchestrator | skipping: [testbed-node-3] 
2026-02-09 04:09:13.088068 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:09:13.088075 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:13.088082 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:13.088090 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:13.088097 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-02-09 04:09:13.088105 | orchestrator | 2026-02-09 04:09:13.088112 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-09 04:09:13.088132 | orchestrator | Monday 09 February 2026 04:08:42 +0000 (0:00:05.418) 0:07:43.927 ******* 2026-02-09 04:09:13.088208 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-09 04:09:13.088218 | orchestrator | 2026-02-09 04:09:13.088225 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-09 04:09:13.088233 | orchestrator | Monday 09 February 2026 04:08:54 +0000 (0:00:11.889) 0:07:55.816 ******* 2026-02-09 04:09:13.088240 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-09 04:09:13.088247 | orchestrator | 2026-02-09 04:09:13.088254 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-09 04:09:13.088299 | orchestrator | Monday 09 February 2026 04:08:55 +0000 (0:00:01.691) 0:07:57.508 ******* 2026-02-09 04:09:13.088307 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:09:13.088314 | orchestrator | 2026-02-09 04:09:13.088321 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-09 04:09:13.088328 | orchestrator | Monday 09 February 2026 04:08:57 +0000 (0:00:01.852) 0:07:59.361 ******* 2026-02-09 04:09:13.088336 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-09 04:09:13.088343 | orchestrator | 2026-02-09 04:09:13.088350 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-09 04:09:13.088357 | orchestrator | Monday 09 February 2026 04:09:07 +0000 (0:00:09.833) 0:08:09.194 ******* 2026-02-09 04:09:13.088364 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:09:13.088373 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:09:13.088380 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:09:13.088387 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:13.088395 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:13.088402 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:13.088409 | orchestrator | 2026-02-09 04:09:13.088416 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-09 04:09:13.088424 | orchestrator | 2026-02-09 04:09:13.088431 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-09 04:09:13.088453 | orchestrator | Monday 09 February 2026 04:09:09 +0000 (0:00:01.830) 0:08:11.025 ******* 2026-02-09 04:09:13.088461 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:09:13.088468 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:09:13.088475 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:09:13.088483 | orchestrator | 2026-02-09 04:09:13.088490 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-09 04:09:13.088497 | orchestrator | 2026-02-09 04:09:13.088504 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-09 04:09:13.088512 | orchestrator | Monday 09 February 2026 04:09:10 +0000 (0:00:00.891) 0:08:11.917 ******* 2026-02-09 04:09:13.088519 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:13.088526 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:13.088533 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:13.088540 | orchestrator | 2026-02-09 
04:09:13.088548 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-02-09 04:09:13.088555 | orchestrator | 2026-02-09 04:09:13.088562 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-02-09 04:09:13.088569 | orchestrator | Monday 09 February 2026 04:09:11 +0000 (0:00:00.740) 0:08:12.657 ******* 2026-02-09 04:09:13.088577 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-02-09 04:09:13.088584 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-09 04:09:13.088600 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-09 04:09:13.088607 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-02-09 04:09:13.088615 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-02-09 04:09:13.088622 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-02-09 04:09:13.088643 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:09:13.088651 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-02-09 04:09:13.088659 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-09 04:09:13.088666 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-09 04:09:13.088673 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-02-09 04:09:13.088681 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-02-09 04:09:13.088688 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-02-09 04:09:13.088695 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:09:13.088703 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-02-09 04:09:13.088710 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-09 04:09:13.088717 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)  2026-02-09 04:09:13.088724 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-02-09 04:09:13.088732 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-02-09 04:09:13.088739 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-02-09 04:09:13.088746 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:09:13.088754 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-02-09 04:09:13.088761 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-09 04:09:13.088768 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-09 04:09:13.088776 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-02-09 04:09:13.088783 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-02-09 04:09:13.088790 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-02-09 04:09:13.088798 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:13.088805 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-02-09 04:09:13.088812 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-09 04:09:13.088820 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-09 04:09:13.088827 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-02-09 04:09:13.088840 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-02-09 04:09:13.088848 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-02-09 04:09:13.088855 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:13.088862 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-02-09 04:09:13.088870 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-09 04:09:13.088877 | orchestrator | skipping: [testbed-node-2] => 
(item=nova-compute-ironic)  2026-02-09 04:09:13.088884 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-02-09 04:09:13.088892 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-02-09 04:09:13.088899 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-02-09 04:09:13.088907 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:13.088926 | orchestrator | 2026-02-09 04:09:13.088934 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-02-09 04:09:13.088941 | orchestrator | 2026-02-09 04:09:13.088958 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-02-09 04:09:13.088975 | orchestrator | Monday 09 February 2026 04:09:12 +0000 (0:00:01.469) 0:08:14.127 ******* 2026-02-09 04:09:13.088983 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-02-09 04:09:13.088995 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-02-09 04:09:13.089003 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:13.089010 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-02-09 04:09:13.089018 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-02-09 04:09:13.089025 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:13.089032 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-02-09 04:09:13.089040 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-02-09 04:09:13.089047 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:13.089054 | orchestrator | 2026-02-09 04:09:13.089067 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-02-09 04:09:14.946246 | orchestrator | 2026-02-09 04:09:14.946375 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-02-09 04:09:14.946385 | 
orchestrator | Monday 09 February 2026 04:09:13 +0000 (0:00:00.576) 0:08:14.704 ******* 2026-02-09 04:09:14.946392 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:14.946399 | orchestrator | 2026-02-09 04:09:14.946406 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-02-09 04:09:14.946412 | orchestrator | 2026-02-09 04:09:14.946456 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-02-09 04:09:14.946465 | orchestrator | Monday 09 February 2026 04:09:14 +0000 (0:00:00.933) 0:08:15.637 ******* 2026-02-09 04:09:14.946471 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:14.946477 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:14.946483 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:14.946489 | orchestrator | 2026-02-09 04:09:14.946496 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:09:14.946502 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 04:09:14.946511 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2026-02-09 04:09:14.946517 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-09 04:09:14.946523 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2026-02-09 04:09:14.946528 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-09 04:09:14.946534 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-09 04:09:14.946540 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-02-09 04:09:14.946545 | orchestrator | 2026-02-09 
04:09:14.946551 | orchestrator | 2026-02-09 04:09:14.946557 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:09:14.946563 | orchestrator | Monday 09 February 2026 04:09:14 +0000 (0:00:00.469) 0:08:16.107 ******* 2026-02-09 04:09:14.946568 | orchestrator | =============================================================================== 2026-02-09 04:09:14.946574 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.11s 2026-02-09 04:09:14.946580 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.54s 2026-02-09 04:09:14.946585 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.44s 2026-02-09 04:09:14.946591 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.36s 2026-02-09 04:09:14.946597 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.66s 2026-02-09 04:09:14.946621 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.38s 2026-02-09 04:09:14.946627 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.81s 2026-02-09 04:09:14.946633 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.24s 2026-02-09 04:09:14.946650 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.34s 2026-02-09 04:09:14.946656 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.19s 2026-02-09 04:09:14.946661 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.89s 2026-02-09 04:09:14.946667 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.66s 2026-02-09 04:09:14.946673 | orchestrator | nova-cell : Get a list of existing cells 
------------------------------- 11.28s 2026-02-09 04:09:14.946678 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.11s 2026-02-09 04:09:14.946687 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 9.83s 2026-02-09 04:09:14.946697 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.45s 2026-02-09 04:09:14.946706 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.19s 2026-02-09 04:09:14.946716 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 7.36s 2026-02-09 04:09:14.946725 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.12s 2026-02-09 04:09:14.946735 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.01s 2026-02-09 04:09:19.074596 | orchestrator | 2026-02-09 04:09:19 | INFO  | Task 0e2cfa69-55dd-42d0-9a23-3e5a49fe49b5 (horizon) was prepared for execution. 2026-02-09 04:09:19.074702 | orchestrator | 2026-02-09 04:09:19 | INFO  | It takes a moment until task 0e2cfa69-55dd-42d0-9a23-3e5a49fe49b5 (horizon) has been started and output is visible here. 
2026-02-09 04:09:26.932761 | orchestrator | 2026-02-09 04:09:26.932835 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:09:26.932841 | orchestrator | 2026-02-09 04:09:26.932846 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:09:26.932851 | orchestrator | Monday 09 February 2026 04:09:23 +0000 (0:00:00.353) 0:00:00.353 ******* 2026-02-09 04:09:26.932855 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:26.932860 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:26.932864 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:26.932868 | orchestrator | 2026-02-09 04:09:26.932872 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:09:26.932876 | orchestrator | Monday 09 February 2026 04:09:23 +0000 (0:00:00.333) 0:00:00.686 ******* 2026-02-09 04:09:26.932880 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-09 04:09:26.932885 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-09 04:09:26.932888 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-09 04:09:26.932892 | orchestrator | 2026-02-09 04:09:26.932896 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-09 04:09:26.932900 | orchestrator | 2026-02-09 04:09:26.932904 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-09 04:09:26.932908 | orchestrator | Monday 09 February 2026 04:09:24 +0000 (0:00:00.484) 0:00:01.171 ******* 2026-02-09 04:09:26.932912 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:09:26.932916 | orchestrator | 2026-02-09 04:09:26.932920 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 
2026-02-09 04:09:26.932924 | orchestrator | Monday 09 February 2026 04:09:25 +0000 (0:00:00.610) 0:00:01.781 ******* 2026-02-09 04:09:26.932944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 04:09:26.932977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 04:09:26.932989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 04:09:26.932994 | orchestrator | 2026-02-09 04:09:26.932999 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-09 04:09:26.933003 | orchestrator | Monday 09 February 2026 04:09:26 +0000 (0:00:01.241) 0:00:03.023 ******* 2026-02-09 04:09:26.933007 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:26.933011 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:26.933015 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:26.933019 | orchestrator | 2026-02-09 04:09:26.933023 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-09 04:09:26.933027 | orchestrator | Monday 09 February 2026 04:09:26 +0000 (0:00:00.511) 0:00:03.534 ******* 2026-02-09 04:09:26.933046 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-09 04:09:33.535847 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-09 04:09:33.535946 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-09 04:09:33.535955 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-02-09 04:09:33.535960 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-09 04:09:33.535965 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-09 04:09:33.535970 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-09 04:09:33.535975 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-09 04:09:33.535980 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-09 04:09:33.536001 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-09 04:09:33.536007 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-09 04:09:33.536012 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-09 04:09:33.536019 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-09 04:09:33.536027 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-09 04:09:33.536034 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-09 04:09:33.536042 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-09 04:09:33.536049 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-09 04:09:33.536055 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-09 04:09:33.536063 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-09 04:09:33.536071 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-09 04:09:33.536078 | orchestrator | skipping: [testbed-node-2] => 
(item={'name': 'mistral', 'enabled': False})  2026-02-09 04:09:33.536085 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-09 04:09:33.536091 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-09 04:09:33.536098 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-09 04:09:33.536107 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-09 04:09:33.536118 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-09 04:09:33.536125 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-02-09 04:09:33.536133 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-09 04:09:33.536140 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-09 04:09:33.536161 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-02-09 04:09:33.536169 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-09 04:09:33.536175 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-09 
04:09:33.536182 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-09 04:09:33.536191 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-09 04:09:33.536198 | orchestrator | 2026-02-09 04:09:33.536205 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-09 04:09:33.536214 | orchestrator | Monday 09 February 2026 04:09:27 +0000 (0:00:00.825) 0:00:04.360 ******* 2026-02-09 04:09:33.536220 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:33.536279 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:33.536288 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:33.536304 | orchestrator | 2026-02-09 04:09:33.536311 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-09 04:09:33.536319 | orchestrator | Monday 09 February 2026 04:09:28 +0000 (0:00:00.368) 0:00:04.728 ******* 2026-02-09 04:09:33.536327 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536335 | orchestrator | 2026-02-09 04:09:33.536361 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-09 04:09:33.536369 | orchestrator | Monday 09 February 2026 04:09:28 +0000 (0:00:00.341) 0:00:05.070 ******* 2026-02-09 04:09:33.536377 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536383 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:33.536391 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:33.536398 | orchestrator | 2026-02-09 04:09:33.536406 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-09 04:09:33.536413 | orchestrator | Monday 09 February 2026 04:09:28 +0000 (0:00:00.328) 0:00:05.398 
******* 2026-02-09 04:09:33.536421 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:33.536430 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:33.536438 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:33.536446 | orchestrator | 2026-02-09 04:09:33.536454 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-09 04:09:33.536462 | orchestrator | Monday 09 February 2026 04:09:29 +0000 (0:00:00.340) 0:00:05.738 ******* 2026-02-09 04:09:33.536470 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536477 | orchestrator | 2026-02-09 04:09:33.536485 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-09 04:09:33.536493 | orchestrator | Monday 09 February 2026 04:09:29 +0000 (0:00:00.146) 0:00:05.885 ******* 2026-02-09 04:09:33.536501 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536509 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:33.536517 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:33.536525 | orchestrator | 2026-02-09 04:09:33.536533 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-09 04:09:33.536541 | orchestrator | Monday 09 February 2026 04:09:29 +0000 (0:00:00.343) 0:00:06.228 ******* 2026-02-09 04:09:33.536548 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:33.536556 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:33.536564 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:33.536572 | orchestrator | 2026-02-09 04:09:33.536580 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-09 04:09:33.536588 | orchestrator | Monday 09 February 2026 04:09:30 +0000 (0:00:00.576) 0:00:06.805 ******* 2026-02-09 04:09:33.536596 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536604 | orchestrator | 2026-02-09 04:09:33.536611 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2026-02-09 04:09:33.536619 | orchestrator | Monday 09 February 2026 04:09:30 +0000 (0:00:00.135) 0:00:06.941 ******* 2026-02-09 04:09:33.536627 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536636 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:33.536645 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:33.536653 | orchestrator | 2026-02-09 04:09:33.536661 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-09 04:09:33.536669 | orchestrator | Monday 09 February 2026 04:09:30 +0000 (0:00:00.333) 0:00:07.275 ******* 2026-02-09 04:09:33.536676 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:33.536685 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:33.536692 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:33.536700 | orchestrator | 2026-02-09 04:09:33.536708 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-09 04:09:33.536715 | orchestrator | Monday 09 February 2026 04:09:30 +0000 (0:00:00.368) 0:00:07.644 ******* 2026-02-09 04:09:33.536724 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536732 | orchestrator | 2026-02-09 04:09:33.536835 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-09 04:09:33.536846 | orchestrator | Monday 09 February 2026 04:09:31 +0000 (0:00:00.143) 0:00:07.787 ******* 2026-02-09 04:09:33.536865 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536873 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:33.536880 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:33.536887 | orchestrator | 2026-02-09 04:09:33.536894 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-09 04:09:33.536901 | orchestrator | Monday 09 February 2026 04:09:31 +0000 (0:00:00.558) 
0:00:08.346 ******* 2026-02-09 04:09:33.536908 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:33.536915 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:33.536923 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:33.536930 | orchestrator | 2026-02-09 04:09:33.536937 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-09 04:09:33.536954 | orchestrator | Monday 09 February 2026 04:09:31 +0000 (0:00:00.344) 0:00:08.690 ******* 2026-02-09 04:09:33.536962 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536968 | orchestrator | 2026-02-09 04:09:33.536976 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-09 04:09:33.536983 | orchestrator | Monday 09 February 2026 04:09:32 +0000 (0:00:00.143) 0:00:08.833 ******* 2026-02-09 04:09:33.536990 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.536997 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:33.537004 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:33.537011 | orchestrator | 2026-02-09 04:09:33.537018 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-09 04:09:33.537025 | orchestrator | Monday 09 February 2026 04:09:32 +0000 (0:00:00.353) 0:00:09.187 ******* 2026-02-09 04:09:33.537033 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:33.537039 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:33.537047 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:33.537054 | orchestrator | 2026-02-09 04:09:33.537061 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-09 04:09:33.537069 | orchestrator | Monday 09 February 2026 04:09:32 +0000 (0:00:00.341) 0:00:09.528 ******* 2026-02-09 04:09:33.537076 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.537083 | orchestrator | 2026-02-09 04:09:33.537091 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2026-02-09 04:09:33.537099 | orchestrator | Monday 09 February 2026 04:09:33 +0000 (0:00:00.382) 0:00:09.911 ******* 2026-02-09 04:09:33.537106 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:33.537113 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:33.537120 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:33.537126 | orchestrator | 2026-02-09 04:09:33.537134 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-09 04:09:33.537156 | orchestrator | Monday 09 February 2026 04:09:33 +0000 (0:00:00.343) 0:00:10.255 ******* 2026-02-09 04:09:48.135862 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:48.135962 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:48.135975 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:48.135986 | orchestrator | 2026-02-09 04:09:48.135996 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-09 04:09:48.136006 | orchestrator | Monday 09 February 2026 04:09:33 +0000 (0:00:00.403) 0:00:10.659 ******* 2026-02-09 04:09:48.136014 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:48.136024 | orchestrator | 2026-02-09 04:09:48.136033 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-09 04:09:48.136045 | orchestrator | Monday 09 February 2026 04:09:34 +0000 (0:00:00.145) 0:00:10.804 ******* 2026-02-09 04:09:48.136060 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:48.136075 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:48.136089 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:48.136103 | orchestrator | 2026-02-09 04:09:48.136119 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-09 04:09:48.136134 | orchestrator | Monday 09 February 2026 04:09:34 +0000 
(0:00:00.311) 0:00:11.116 ******* 2026-02-09 04:09:48.136180 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:48.136192 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:48.136203 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:48.136258 | orchestrator | 2026-02-09 04:09:48.136273 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-09 04:09:48.136286 | orchestrator | Monday 09 February 2026 04:09:34 +0000 (0:00:00.608) 0:00:11.724 ******* 2026-02-09 04:09:48.136300 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:48.136312 | orchestrator | 2026-02-09 04:09:48.136327 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-09 04:09:48.136341 | orchestrator | Monday 09 February 2026 04:09:35 +0000 (0:00:00.160) 0:00:11.885 ******* 2026-02-09 04:09:48.136355 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:48.136370 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:48.136385 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:48.136400 | orchestrator | 2026-02-09 04:09:48.136414 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-09 04:09:48.136429 | orchestrator | Monday 09 February 2026 04:09:35 +0000 (0:00:00.356) 0:00:12.242 ******* 2026-02-09 04:09:48.136451 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:09:48.136467 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:09:48.136481 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:09:48.136496 | orchestrator | 2026-02-09 04:09:48.136510 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-09 04:09:48.136526 | orchestrator | Monday 09 February 2026 04:09:35 +0000 (0:00:00.389) 0:00:12.632 ******* 2026-02-09 04:09:48.136542 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:48.136557 | orchestrator | 2026-02-09 04:09:48.136573 | 
orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-09 04:09:48.136587 | orchestrator | Monday 09 February 2026 04:09:36 +0000 (0:00:00.151) 0:00:12.783 *******
2026-02-09 04:09:48.136601 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:09:48.136618 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:09:48.136632 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:09:48.136647 | orchestrator |
2026-02-09 04:09:48.136656 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-09 04:09:48.136665 | orchestrator | Monday 09 February 2026 04:09:36 +0000 (0:00:00.551) 0:00:13.335 *******
2026-02-09 04:09:48.136673 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:09:48.136682 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:09:48.136691 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:09:48.136699 | orchestrator |
2026-02-09 04:09:48.136708 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-09 04:09:48.136717 | orchestrator | Monday 09 February 2026 04:09:36 +0000 (0:00:00.347) 0:00:13.682 *******
2026-02-09 04:09:48.136726 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:09:48.136734 | orchestrator |
2026-02-09 04:09:48.136743 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-09 04:09:48.136751 | orchestrator | Monday 09 February 2026 04:09:37 +0000 (0:00:00.164) 0:00:13.847 *******
2026-02-09 04:09:48.136760 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:09:48.136769 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:09:48.136778 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:09:48.136786 | orchestrator |
2026-02-09 04:09:48.136814 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-09 04:09:48.136830 | orchestrator | Monday 09 February 2026 04:09:37 +0000 (0:00:00.318) 0:00:14.165 *******
2026-02-09 04:09:48.136844 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:09:48.136859 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:09:48.136873 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:09:48.136888 | orchestrator |
2026-02-09 04:09:48.136904 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-09 04:09:48.136918 | orchestrator | Monday 09 February 2026 04:09:39 +0000 (0:00:01.880) 0:00:16.046 *******
2026-02-09 04:09:48.136933 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-09 04:09:48.136953 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-09 04:09:48.136962 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-09 04:09:48.136971 | orchestrator |
2026-02-09 04:09:48.136979 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-09 04:09:48.136988 | orchestrator | Monday 09 February 2026 04:09:41 +0000 (0:00:01.911) 0:00:17.957 *******
2026-02-09 04:09:48.136996 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-09 04:09:48.137006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-09 04:09:48.137015 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-09 04:09:48.137023 | orchestrator |
2026-02-09 04:09:48.137032 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-09 04:09:48.137058 | orchestrator | Monday 09 February 2026 04:09:43 +0000 (0:00:01.842) 0:00:19.799 *******
2026-02-09 04:09:48.137068 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-09 04:09:48.137077 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-09 04:09:48.137085 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-09 04:09:48.137094 | orchestrator |
2026-02-09 04:09:48.137103 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-02-09 04:09:48.137111 | orchestrator | Monday 09 February 2026 04:09:44 +0000 (0:00:01.572) 0:00:21.371 *******
2026-02-09 04:09:48.137120 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:09:48.137128 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:09:48.137137 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:09:48.137146 | orchestrator |
2026-02-09 04:09:48.137154 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-02-09 04:09:48.137169 | orchestrator | Monday 09 February 2026 04:09:45 +0000 (0:00:00.595) 0:00:21.966 *******
2026-02-09 04:09:48.137184 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:09:48.137199 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:09:48.137287 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:09:48.137304 | orchestrator |
2026-02-09 04:09:48.137318 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-09 04:09:48.137334 | orchestrator | Monday 09 February 2026 04:09:45 +0000 (0:00:00.306) 0:00:22.273 *******
2026-02-09 04:09:48.137343 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:09:48.137352 | orchestrator |
2026-02-09 04:09:48.137361 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-02-09 04:09:48.137370 | orchestrator |
Monday 09 February 2026 04:09:46 +0000 (0:00:00.598) 0:00:22.871 ******* 2026-02-09 04:09:48.137393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 04:09:48.137429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 04:09:48.834899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 04:09:48.835014 | orchestrator | 2026-02-09 04:09:48.835030 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-09 04:09:48.835042 | orchestrator | Monday 09 February 2026 04:09:48 +0000 (0:00:01.977) 0:00:24.848 ******* 2026-02-09 04:09:48.835072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 04:09:48.835099 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:48.835128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 04:09:48.835147 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:48.835179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 04:09:51.384374 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:51.384488 | orchestrator | 2026-02-09 04:09:51.384511 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-02-09 04:09:51.384526 | orchestrator | Monday 09 February 2026 04:09:48 +0000 (0:00:00.702) 0:00:25.551 ******* 2026-02-09 04:09:51.384566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 04:09:51.384586 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:09:51.384626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 04:09:51.384668 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:09:51.384684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 04:09:51.384698 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:09:51.384711 | orchestrator | 2026-02-09 04:09:51.384723 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-09 04:09:51.384773 | orchestrator | Monday 09 February 2026 04:09:49 +0000 (0:00:00.884) 0:00:26.435 ******* 2026-02-09 04:09:51.384806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 04:10:37.356557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 04:10:37.356725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 04:10:37.356773 | orchestrator | 
2026-02-09 04:10:37.356792 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-09 04:10:37.356808 | orchestrator | Monday 09 February 2026 04:09:51 +0000 (0:00:01.666) 0:00:28.102 *******
2026-02-09 04:10:37.356823 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:10:37.356838 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:10:37.356852 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:10:37.356866 | orchestrator |
2026-02-09 04:10:37.356881 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-09 04:10:37.356894 | orchestrator | Monday 09 February 2026 04:09:51 +0000 (0:00:00.347) 0:00:28.449 *******
2026-02-09 04:10:37.356909 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:10:37.356923 | orchestrator |
2026-02-09 04:10:37.356937 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-02-09 04:10:37.356949 | orchestrator | Monday 09 February 2026 04:09:52 +0000 (0:00:00.569) 0:00:29.019 *******
2026-02-09 04:10:37.356962 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:10:37.356975 | orchestrator |
2026-02-09 04:10:37.356986 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-02-09 04:10:37.356999 | orchestrator | Monday 09 February 2026 04:09:54 +0000 (0:00:02.126) 0:00:31.145 *******
2026-02-09 04:10:37.357011 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:10:37.357024 | orchestrator |
2026-02-09 04:10:37.357056 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-02-09 04:10:37.357079 | orchestrator | Monday 09 February 2026 04:09:56 +0000 (0:00:02.578) 0:00:33.723 *******
2026-02-09 04:10:37.357092 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:10:37.357106 | orchestrator |
2026-02-09 04:10:37.357119 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-09 04:10:37.357132 | orchestrator | Monday 09 February 2026 04:10:11 +0000 (0:00:14.911) 0:00:48.634 *******
2026-02-09 04:10:37.357186 | orchestrator |
2026-02-09 04:10:37.357200 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-09 04:10:37.357213 | orchestrator | Monday 09 February 2026 04:10:11 +0000 (0:00:00.084) 0:00:48.719 *******
2026-02-09 04:10:37.357226 | orchestrator |
2026-02-09 04:10:37.357239 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-09 04:10:37.357251 | orchestrator | Monday 09 February 2026 04:10:12 +0000 (0:00:00.086) 0:00:48.806 *******
2026-02-09 04:10:37.357263 | orchestrator |
2026-02-09 04:10:37.357277 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-02-09 04:10:37.357288 | orchestrator | Monday 09 February 2026 04:10:12 +0000 (0:00:00.083) 0:00:48.889 *******
2026-02-09 04:10:37.357300 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:10:37.357312 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:10:37.357325 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:10:37.357337 | orchestrator |
2026-02-09 04:10:37.357350 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 04:10:37.357364 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-09 04:10:37.357379 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-09 04:10:37.357390 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-09 04:10:37.357404 | orchestrator |
2026-02-09 04:10:37.357417 | orchestrator |
2026-02-09 04:10:37.357429
| orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:10:37.357440 | orchestrator | Monday 09 February 2026 04:10:37 +0000 (0:00:25.163) 0:01:14.053 ******* 2026-02-09 04:10:37.357452 | orchestrator | =============================================================================== 2026-02-09 04:10:37.357464 | orchestrator | horizon : Restart horizon container ------------------------------------ 25.16s 2026-02-09 04:10:37.357477 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.91s 2026-02-09 04:10:37.357489 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.58s 2026-02-09 04:10:37.357501 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.13s 2026-02-09 04:10:37.357514 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.98s 2026-02-09 04:10:37.357528 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.91s 2026-02-09 04:10:37.357541 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.88s 2026-02-09 04:10:37.357564 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.84s 2026-02-09 04:10:37.357578 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.67s 2026-02-09 04:10:37.357590 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.57s 2026-02-09 04:10:37.357604 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.24s 2026-02-09 04:10:37.357617 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.88s 2026-02-09 04:10:37.357630 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2026-02-09 04:10:37.357659 | orchestrator | 
service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.70s 2026-02-09 04:10:37.800069 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2026-02-09 04:10:37.800198 | orchestrator | horizon : Update policy file name --------------------------------------- 0.61s 2026-02-09 04:10:37.800208 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.60s 2026-02-09 04:10:37.800214 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.60s 2026-02-09 04:10:37.800237 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s 2026-02-09 04:10:37.800242 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-02-09 04:10:40.279832 | orchestrator | 2026-02-09 04:10:40 | INFO  | Task b591a131-10ce-44ea-928a-4670fbccb26b (skyline) was prepared for execution. 2026-02-09 04:10:40.279925 | orchestrator | 2026-02-09 04:10:40 | INFO  | It takes a moment until task b591a131-10ce-44ea-928a-4670fbccb26b (skyline) has been started and output is visible here. 
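The TASKS RECAP above comes from Ansible's task-profiling callback and lists per-task durations. When post-processing logs like this one, the slowest tasks can be pulled out mechanically; the following is a small illustrative sketch (the helper name and regex are my own, not part of the job or of OSISM tooling):

```python
import re

# Matches recap lines such as:
#   "horizon : Restart horizon container ------------------------------------ 25.16s"
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def slowest_tasks(lines, top=3):
    """Return (task, seconds) tuples sorted by duration, longest first."""
    found = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            found.append((m.group("task").strip(), float(m.group("secs"))))
    return sorted(found, key=lambda t: t[1], reverse=True)[:top]

# Sample recap lines taken from the log above
recap = [
    "horizon : Restart horizon container ------------------------------------ 25.16s",
    "horizon : Running Horizon bootstrap container -------------------------- 14.91s",
    "horizon : Creating Horizon database ------------------------------------- 2.13s",
]
print(slowest_tasks(recap, top=2))
```

Note that hyphens inside task names (e.g. `service-cert-copy`) are not matched by the separator pattern, since the dash run must be surrounded by whitespace.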
2026-02-09 04:11:09.961772 | orchestrator |
2026-02-09 04:11:09.961870 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 04:11:09.961881 | orchestrator |
2026-02-09 04:11:09.961889 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 04:11:09.961897 | orchestrator | Monday 09 February 2026 04:10:44 +0000 (0:00:00.275) 0:00:00.275 *******
2026-02-09 04:11:09.961904 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:11:09.961912 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:11:09.961919 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:11:09.961926 | orchestrator |
2026-02-09 04:11:09.961933 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 04:11:09.961940 | orchestrator | Monday 09 February 2026 04:10:44 +0000 (0:00:00.328) 0:00:00.604 *******
2026-02-09 04:11:09.961947 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-02-09 04:11:09.961954 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-02-09 04:11:09.961961 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-02-09 04:11:09.961968 | orchestrator |
2026-02-09 04:11:09.961975 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-02-09 04:11:09.961982 | orchestrator |
2026-02-09 04:11:09.961988 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-02-09 04:11:09.961995 | orchestrator | Monday 09 February 2026 04:10:45 +0000 (0:00:00.490) 0:00:01.094 *******
2026-02-09 04:11:09.962067 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:11:09.962076 | orchestrator |
2026-02-09 04:11:09.962082 | orchestrator | TASK [service-ks-register : skyline | Creating services] ***********************
2026-02-09 04:11:09.962089 | orchestrator | Monday 09 February 2026 04:10:46 +0000 (0:00:00.622) 0:00:01.716 *******
2026-02-09 04:11:09.962147 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel))
2026-02-09 04:11:09.962160 | orchestrator |
2026-02-09 04:11:09.962171 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] **********************
2026-02-09 04:11:09.962183 | orchestrator | Monday 09 February 2026 04:10:49 +0000 (0:00:03.165) 0:00:04.881 *******
2026-02-09 04:11:09.962194 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal)
2026-02-09 04:11:09.962207 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public)
2026-02-09 04:11:09.962220 | orchestrator |
2026-02-09 04:11:09.962231 | orchestrator | TASK [service-ks-register : skyline | Creating projects] ***********************
2026-02-09 04:11:09.962243 | orchestrator | Monday 09 February 2026 04:10:55 +0000 (0:00:06.055) 0:00:10.937 *******
2026-02-09 04:11:09.962255 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-09 04:11:09.962268 | orchestrator |
2026-02-09 04:11:09.962281 | orchestrator | TASK [service-ks-register : skyline | Creating users] **************************
2026-02-09 04:11:09.962293 | orchestrator | Monday 09 February 2026 04:10:58 +0000 (0:00:02.979) 0:00:13.916 *******
2026-02-09 04:11:09.962304 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-09 04:11:09.962312 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service)
2026-02-09 04:11:09.962319 | orchestrator |
2026-02-09 04:11:09.962327 | orchestrator | TASK [service-ks-register : skyline | Creating roles] **************************
2026-02-09 04:11:09.962336 | orchestrator | Monday 09 February 2026 04:11:02 +0000 (0:00:03.775) 0:00:17.692 *******
2026-02-09 04:11:09.962366 | orchestrator | ok: [testbed-node-0] => (item=admin)
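The per-item dicts printed in the skyline tasks below follow kolla-ansible's service-definition shape: each service carries a `container_name`, a `healthcheck` whose `test` is typically `['CMD-SHELL', '<command>']`, and a `haproxy` mapping of frontends with `listen_port` values. A minimal sketch of reading that shape back out of a log (the helper name and the trimmed sample dict are my own, derived from the entries logged below):

```python
def summarize_service(service):
    """Summarize a kolla-style service definition dict (as printed in the log)."""
    health = service.get("healthcheck", {})
    test = health.get("test", [])
    # 'test' is usually ['CMD-SHELL', '<command>']
    check_cmd = test[1] if len(test) > 1 else None
    ports = sorted(
        int(entry["listen_port"]) for entry in service.get("haproxy", {}).values()
    )
    return {"container": service["container_name"], "check": check_cmd, "ports": ports}

# Trimmed version of the skyline-apiserver item logged for testbed-node-0
skyline_apiserver = {
    "container_name": "skyline_apiserver",
    "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9998/docs"]},
    "haproxy": {
        "skyline_apiserver": {"listen_port": "9998"},
        "skyline_apiserver_external": {"listen_port": "9998"},
    },
}
print(summarize_service(skyline_apiserver))
```

The internal and external haproxy frontends listen on the same port here (9998 for the API server, 9999 for the console), which matches the endpoint URLs registered in Keystone above.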
2026-02-09 04:11:09.962374 | orchestrator | 2026-02-09 04:11:09.962382 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-09 04:11:09.962390 | orchestrator | Monday 09 February 2026 04:11:05 +0000 (0:00:02.983) 0:00:20.676 ******* 2026-02-09 04:11:09.962398 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-09 04:11:09.962406 | orchestrator | 2026-02-09 04:11:09.962414 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-09 04:11:09.962433 | orchestrator | Monday 09 February 2026 04:11:08 +0000 (0:00:03.603) 0:00:24.280 ******* 2026-02-09 04:11:09.962445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:09.962475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:09.962491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:09.962500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:09.962522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:09.962537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:13.708146 | orchestrator | 2026-02-09 04:11:13.708244 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-09 04:11:13.708262 | orchestrator | Monday 09 February 2026 04:11:09 +0000 (0:00:01.282) 0:00:25.563 ******* 2026-02-09 04:11:13.708274 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:11:13.708286 | orchestrator | 2026-02-09 04:11:13.708298 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-09 04:11:13.708309 | orchestrator | Monday 09 February 2026 04:11:10 +0000 (0:00:00.736) 0:00:26.299 ******* 2026-02-09 04:11:13.708324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:13.708339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:13.708390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:13.708421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:13.708435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:13.708447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:13.708466 | orchestrator | 2026-02-09 04:11:13.708479 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-09 04:11:13.708490 | orchestrator | Monday 09 February 2026 04:11:13 +0000 (0:00:02.362) 0:00:28.661 ******* 2026-02-09 04:11:13.708534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-09 04:11:13.708548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-09 04:11:13.708560 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:11:13.708582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-09 04:11:15.057281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-09 04:11:15.057395 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:11:15.057435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-09 04:11:15.057456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-09 04:11:15.057467 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:11:15.057477 | orchestrator | 2026-02-09 04:11:15.057487 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-09 04:11:15.057498 | orchestrator | Monday 09 February 2026 04:11:13 +0000 (0:00:00.658) 0:00:29.319 ******* 2026-02-09 04:11:15.057507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-09 04:11:15.057534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-09 04:11:15.057553 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:11:15.057569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-09 04:11:15.057579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-09 04:11:15.057589 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:11:15.057599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-09 04:11:15.057616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-09 04:11:23.878242 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:11:23.878341 | orchestrator | 2026-02-09 04:11:23.878357 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-02-09 04:11:23.878367 | orchestrator | Monday 09 February 2026 04:11:15 +0000 (0:00:01.344) 0:00:30.664 ******* 2026-02-09 04:11:23.878389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:23.878398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:23.878404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:23.878426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:23.878446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:23.878462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-09 04:11:23.878468 | orchestrator |
2026-02-09 04:11:23.878474 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-02-09 04:11:23.878479 | orchestrator | Monday 09 February 2026 04:11:17 +0000 (0:00:02.555) 0:00:33.220 *******
2026-02-09 04:11:23.878484 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-09 04:11:23.878490 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-09 04:11:23.878495 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-09 04:11:23.878500 | orchestrator |
2026-02-09 04:11:23.878505 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-02-09 04:11:23.878510 | orchestrator | Monday 09 February 2026 04:11:19 +0000 (0:00:01.553) 0:00:34.774 *******
2026-02-09 04:11:23.878515 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-09 04:11:23.878520 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-09 04:11:23.878525 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-09 04:11:23.878535 | orchestrator |
2026-02-09 04:11:23.878540 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-02-09 04:11:23.878545 | orchestrator | Monday 09 February 2026 04:11:21 +0000 (0:00:02.247) 0:00:37.021 *******
2026-02-09 04:11:23.878551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:23.878562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:25.970327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:25.970431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:25.970469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:25.970481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:25.970492 | orchestrator | 2026-02-09 04:11:25.970505 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-09 04:11:25.970516 | orchestrator | Monday 09 February 2026 04:11:23 +0000 (0:00:02.467) 0:00:39.489 ******* 2026-02-09 04:11:25.970526 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:11:25.970537 | orchestrator | skipping: 
[testbed-node-1] 2026-02-09 04:11:25.970547 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:11:25.970556 | orchestrator | 2026-02-09 04:11:25.970582 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-09 04:11:25.970595 | orchestrator | Monday 09 February 2026 04:11:24 +0000 (0:00:00.326) 0:00:39.816 ******* 2026-02-09 04:11:25.970619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:25.970638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:25.970667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:25.970685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:25.970720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:55.125468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-09 04:11:55.125607 | orchestrator | 2026-02-09 04:11:55.125619 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-02-09 04:11:55.125627 | orchestrator | Monday 09 February 2026 04:11:25 +0000 (0:00:01.755) 0:00:41.571 ******* 2026-02-09 04:11:55.125635 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:11:55.125642 | orchestrator | 2026-02-09 04:11:55.125649 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-02-09 04:11:55.125656 | orchestrator | Monday 09 February 2026 04:11:28 +0000 (0:00:02.076) 0:00:43.647 ******* 2026-02-09 04:11:55.125660 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:11:55.125664 | orchestrator | 2026-02-09 04:11:55.125668 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-02-09 04:11:55.125672 | orchestrator | Monday 09 February 2026 04:11:30 +0000 (0:00:02.100) 0:00:45.748 ******* 2026-02-09 04:11:55.125676 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:11:55.125680 | orchestrator | 2026-02-09 04:11:55.125684 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-09 04:11:55.125688 | orchestrator | Monday 09 February 2026 04:11:37 +0000 (0:00:07.443) 0:00:53.192 ******* 2026-02-09 04:11:55.125692 | orchestrator | 2026-02-09 04:11:55.125696 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-09 04:11:55.125700 | orchestrator | Monday 09 February 2026 04:11:37 +0000 (0:00:00.070) 0:00:53.262 ******* 2026-02-09 04:11:55.125704 | orchestrator | 2026-02-09 04:11:55.125707 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-02-09 04:11:55.125711 | orchestrator | Monday 09 February 2026 04:11:37 +0000 (0:00:00.071) 0:00:53.334 ******* 2026-02-09 04:11:55.125715 | orchestrator | 2026-02-09 04:11:55.125719 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-02-09 04:11:55.125722 | orchestrator | Monday 09 February 2026 04:11:37 +0000 (0:00:00.080) 0:00:53.414 ******* 2026-02-09 04:11:55.125726 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:11:55.125730 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:11:55.125734 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:11:55.125737 | orchestrator | 2026-02-09 04:11:55.125741 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-02-09 04:11:55.125745 | orchestrator | Monday 09 February 2026 04:11:45 +0000 (0:00:08.109) 0:01:01.524 ******* 2026-02-09 04:11:55.125749 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:11:55.125752 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:11:55.125756 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:11:55.125760 | orchestrator | 2026-02-09 04:11:55.125764 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:11:55.125768 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-09 04:11:55.125774 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-09 04:11:55.125778 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-09 04:11:55.125781 | orchestrator | 2026-02-09 04:11:55.125785 | orchestrator | 2026-02-09 04:11:55.125789 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:11:55.125793 | orchestrator | Monday 09 
February 2026 04:11:54 +0000 (0:00:08.806) 0:01:10.330 ******* 2026-02-09 04:11:55.125796 | orchestrator | =============================================================================== 2026-02-09 04:11:55.125800 | orchestrator | skyline : Restart skyline-console container ----------------------------- 8.81s 2026-02-09 04:11:55.125809 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 8.11s 2026-02-09 04:11:55.125813 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.44s 2026-02-09 04:11:55.125817 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.06s 2026-02-09 04:11:55.125821 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.78s 2026-02-09 04:11:55.125835 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.60s 2026-02-09 04:11:55.125839 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.17s 2026-02-09 04:11:55.125843 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 2.98s 2026-02-09 04:11:55.125860 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 2.98s 2026-02-09 04:11:55.125864 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.56s 2026-02-09 04:11:55.125868 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.47s 2026-02-09 04:11:55.125871 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.36s 2026-02-09 04:11:55.125875 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.25s 2026-02-09 04:11:55.125879 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.10s 2026-02-09 04:11:55.125883 | orchestrator | skyline : Creating Skyline 
database ------------------------------------- 2.08s 2026-02-09 04:11:55.125886 | orchestrator | skyline : Check skyline container --------------------------------------- 1.76s 2026-02-09 04:11:55.125890 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.55s 2026-02-09 04:11:55.125894 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.34s 2026-02-09 04:11:55.125898 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.28s 2026-02-09 04:11:55.125901 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.74s 2026-02-09 04:11:57.641847 | orchestrator | 2026-02-09 04:11:57 | INFO  | Task 89ddb2c7-c965-485f-a163-ccfaa89d0002 (glance) was prepared for execution. 2026-02-09 04:11:57.641922 | orchestrator | 2026-02-09 04:11:57 | INFO  | It takes a moment until task 89ddb2c7-c965-485f-a163-ccfaa89d0002 (glance) has been started and output is visible here. 
2026-02-09 04:12:30.590338 | orchestrator |
2026-02-09 04:12:30.590418 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 04:12:30.590427 | orchestrator |
2026-02-09 04:12:30.590433 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 04:12:30.590439 | orchestrator | Monday 09 February 2026 04:12:02 +0000 (0:00:00.304) 0:00:00.304 *******
2026-02-09 04:12:30.590445 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:12:30.590452 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:12:30.590457 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:12:30.590462 | orchestrator |
2026-02-09 04:12:30.590468 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 04:12:30.590473 | orchestrator | Monday 09 February 2026 04:12:02 +0000 (0:00:00.343) 0:00:00.648 *******
2026-02-09 04:12:30.590479 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-09 04:12:30.590484 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-09 04:12:30.590490 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-09 04:12:30.590495 | orchestrator |
2026-02-09 04:12:30.590500 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-09 04:12:30.590505 | orchestrator |
2026-02-09 04:12:30.590510 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-09 04:12:30.590516 | orchestrator | Monday 09 February 2026 04:12:03 +0000 (0:00:00.482) 0:00:01.130 *******
2026-02-09 04:12:30.590521 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:12:30.590544 | orchestrator |
2026-02-09 04:12:30.590550 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-09 04:12:30.590555 | orchestrator | Monday 09 February 2026 04:12:03 +0000 (0:00:00.587) 0:00:01.718 *******
2026-02-09 04:12:30.590560 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-09 04:12:30.590565 | orchestrator |
2026-02-09 04:12:30.590570 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-09 04:12:30.590575 | orchestrator | Monday 09 February 2026 04:12:06 +0000 (0:00:03.184) 0:00:04.902 *******
2026-02-09 04:12:30.590580 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-09 04:12:30.590586 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-09 04:12:30.590591 | orchestrator |
2026-02-09 04:12:30.590596 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-09 04:12:30.590601 | orchestrator | Monday 09 February 2026 04:12:12 +0000 (0:00:05.941) 0:00:10.843 *******
2026-02-09 04:12:30.590606 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-09 04:12:30.590613 | orchestrator |
2026-02-09 04:12:30.590618 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-09 04:12:30.590623 | orchestrator | Monday 09 February 2026 04:12:15 +0000 (0:00:03.019) 0:00:13.863 *******
2026-02-09 04:12:30.590629 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-09 04:12:30.590634 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-09 04:12:30.590639 | orchestrator |
2026-02-09 04:12:30.590644 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-09 04:12:30.590649 | orchestrator | Monday 09 February 2026 04:12:19 +0000 (0:00:03.007) 0:00:17.628 *******
2026-02-09 04:12:30.590655 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-09
04:12:30.590660 | orchestrator | 2026-02-09 04:12:30.590665 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-09 04:12:30.590670 | orchestrator | Monday 09 February 2026 04:12:22 +0000 (0:00:03.007) 0:00:20.635 ******* 2026-02-09 04:12:30.590675 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-09 04:12:30.590680 | orchestrator | 2026-02-09 04:12:30.590696 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-09 04:12:30.590701 | orchestrator | Monday 09 February 2026 04:12:26 +0000 (0:00:03.621) 0:00:24.257 ******* 2026-02-09 04:12:30.590724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:12:30.590738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:12:30.590748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:12:30.590754 | orchestrator | 2026-02-09 04:12:30.590759 | orchestrator | TASK [glance : include_tasks] 
**************************************************
2026-02-09 04:12:30.590765 | orchestrator | Monday 09 February 2026 04:12:29 +0000 (0:00:03.590) 0:00:27.848 *******
2026-02-09 04:12:30.590770 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:12:30.590776 | orchestrator |
2026-02-09 04:12:30.590785 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-09 04:12:46.850919 | orchestrator | Monday 09 February 2026 04:12:30 +0000 (0:00:00.781) 0:00:28.629 *******
2026-02-09 04:12:46.851103 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:12:46.851123 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:12:46.851135 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:12:46.851146 | orchestrator |
2026-02-09 04:12:46.851158 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-09 04:12:46.851170 | orchestrator | Monday 09 February 2026 04:12:34 +0000 (0:00:03.733) 0:00:32.362 *******
2026-02-09 04:12:46.851182 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-09 04:12:46.851195 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-09 04:12:46.851206 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-09 04:12:46.851217 | orchestrator |
2026-02-09 04:12:46.851228 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-09 04:12:46.851238 | orchestrator | Monday 09 February 2026 04:12:36 +0000 (0:00:01.691) 0:00:34.053 *******
2026-02-09 04:12:46.851249 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-09
04:12:46.851260 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-09 04:12:46.851271 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-09 04:12:46.851282 | orchestrator |
2026-02-09 04:12:46.851292 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-09 04:12:46.851303 | orchestrator | Monday 09 February 2026 04:12:37 +0000 (0:00:01.405) 0:00:35.459 *******
2026-02-09 04:12:46.851312 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:12:46.851319 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:12:46.851325 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:12:46.851332 | orchestrator |
2026-02-09 04:12:46.851338 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-09 04:12:46.851344 | orchestrator | Monday 09 February 2026 04:12:38 +0000 (0:00:00.147) 0:00:36.142 *******
2026-02-09 04:12:46.851350 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:12:46.851356 | orchestrator |
2026-02-09 04:12:46.851362 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-09 04:12:46.851369 | orchestrator | Monday 09 February 2026 04:12:38 +0000 (0:00:00.314) 0:00:36.289 *******
2026-02-09 04:12:46.851375 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:12:46.851381 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:12:46.851387 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:12:46.851393 | orchestrator |
2026-02-09 04:12:46.851399 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-09 04:12:46.851405 | orchestrator | Monday 09 February 2026 04:12:38 +0000 (0:00:00.799) 0:00:36.604 *******
2026-02-09 04:12:46.851411 | orchestrator | included:
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:12:46.851417 | orchestrator | 2026-02-09 04:12:46.851424 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-09 04:12:46.851430 | orchestrator | Monday 09 February 2026 04:12:39 +0000 (0:00:00.799) 0:00:37.403 ******* 2026-02-09 04:12:46.851453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:12:46.851498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:12:46.851512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:12:46.851526 | orchestrator | 2026-02-09 04:12:46.851534 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-09 04:12:46.851541 | orchestrator | Monday 09 February 2026 04:12:43 +0000 (0:00:04.079) 0:00:41.483 ******* 2026-02-09 04:12:46.851555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 04:12:51.040359 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:12:51.040473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 04:12:51.040513 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:12:51.040526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 04:12:51.040536 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:12:51.040547 | orchestrator | 2026-02-09 04:12:51.040558 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-09 04:12:51.040570 | orchestrator | Monday 09 February 2026 04:12:46 +0000 (0:00:03.409) 0:00:44.893 ******* 2026-02-09 04:12:51.040600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 04:12:51.040619 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:12:51.040635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 04:12:51.040645 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:12:51.040664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-09 04:13:28.084229 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:13:28.084342 | orchestrator |
2026-02-09 04:13:28.084360 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2026-02-09 04:13:28.084373 | orchestrator | Monday 09 February 2026 04:12:51 +0000 (0:00:04.187) 0:00:49.081 *******
2026-02-09 04:13:28.084383 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:13:28.084393 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:13:28.084402 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:13:28.084435 | orchestrator |
2026-02-09 04:13:28.084443 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2026-02-09 04:13:28.084450 | orchestrator | Monday 09 February 2026 04:12:54 +0000 (0:00:03.402) 0:00:52.483 *******
2026-02-09 04:13:28.084471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api',
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:13:28.084481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:13:28.084509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-09 04:13:28.084522 | orchestrator |
2026-02-09 04:13:28.084528 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-02-09 04:13:28.084535 | orchestrator | Monday 09 February 2026 04:12:58 +0000 (0:00:05.920) 0:00:56.520 *******
2026-02-09 04:13:28.084540 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:13:28.084546 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:13:28.084552 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:13:28.084558 | orchestrator |
2026-02-09 04:13:28.084564 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-02-09 04:13:28.084569 | orchestrator | Monday 09 February 2026 04:13:04 +0000 (0:00:05.920) 0:01:02.440 *******
2026-02-09 04:13:28.084575 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:13:28.084581 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:13:28.084587 |
orchestrator | skipping: [testbed-node-2] 2026-02-09 04:13:28.084592 | orchestrator | 2026-02-09 04:13:28.084598 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-09 04:13:28.084604 | orchestrator | Monday 09 February 2026 04:13:08 +0000 (0:00:04.106) 0:01:06.547 ******* 2026-02-09 04:13:28.084610 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:13:28.084615 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:13:28.084621 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:13:28.084627 | orchestrator | 2026-02-09 04:13:28.084633 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-09 04:13:28.084638 | orchestrator | Monday 09 February 2026 04:13:12 +0000 (0:00:03.734) 0:01:10.281 ******* 2026-02-09 04:13:28.084644 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:13:28.084650 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:13:28.084655 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:13:28.084661 | orchestrator | 2026-02-09 04:13:28.084667 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-09 04:13:28.084673 | orchestrator | Monday 09 February 2026 04:13:15 +0000 (0:00:03.576) 0:01:13.858 ******* 2026-02-09 04:13:28.084679 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:13:28.084684 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:13:28.084690 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:13:28.084696 | orchestrator | 2026-02-09 04:13:28.084701 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-09 04:13:28.084707 | orchestrator | Monday 09 February 2026 04:13:19 +0000 (0:00:03.755) 0:01:17.614 ******* 2026-02-09 04:13:28.084713 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:13:28.084719 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:13:28.084725 | 
orchestrator | skipping: [testbed-node-2] 2026-02-09 04:13:28.084730 | orchestrator | 2026-02-09 04:13:28.084740 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-09 04:13:28.084746 | orchestrator | Monday 09 February 2026 04:13:20 +0000 (0:00:00.616) 0:01:18.231 ******* 2026-02-09 04:13:28.084752 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-09 04:13:28.084759 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:13:28.084765 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-09 04:13:28.084770 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:13:28.084776 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-09 04:13:28.084782 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:13:28.084788 | orchestrator | 2026-02-09 04:13:28.084793 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-09 04:13:28.084799 | orchestrator | Monday 09 February 2026 04:13:23 +0000 (0:00:03.430) 0:01:21.661 ******* 2026-02-09 04:13:28.084805 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:13:28.084811 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:13:28.084816 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:13:28.084822 | orchestrator | 2026-02-09 04:13:28.084828 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-09 04:13:28.084838 | orchestrator | Monday 09 February 2026 04:13:28 +0000 (0:00:04.464) 0:01:26.125 ******* 2026-02-09 04:14:39.309171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:14:39.309247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:14:39.309281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 04:14:39.309287 | orchestrator | 2026-02-09 04:14:39.309293 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-09 04:14:39.309298 | orchestrator | Monday 09 February 2026 04:13:32 +0000 (0:00:03.930) 0:01:30.056 ******* 2026-02-09 04:14:39.309302 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:14:39.309307 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:14:39.309310 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:14:39.309314 | orchestrator | 2026-02-09 04:14:39.309318 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-09 04:14:39.309322 | orchestrator | Monday 09 February 2026 04:13:32 +0000 (0:00:00.575) 0:01:30.632 ******* 2026-02-09 04:14:39.309326 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:14:39.309330 | orchestrator | 2026-02-09 04:14:39.309334 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-09 04:14:39.309337 | orchestrator | Monday 09 February 2026 04:13:34 +0000 (0:00:02.041) 0:01:32.673 ******* 2026-02-09 04:14:39.309341 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:14:39.309345 | orchestrator | 2026-02-09 04:14:39.309349 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-09 04:14:39.309353 | orchestrator | Monday 09 February 2026 04:13:36 +0000 (0:00:02.072) 0:01:34.746 ******* 2026-02-09 04:14:39.309357 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:14:39.309360 | orchestrator | 2026-02-09 04:14:39.309364 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-09 04:14:39.309372 | orchestrator | Monday 09 February 2026 04:13:38 +0000 (0:00:01.930) 0:01:36.676 ******* 2026-02-09 04:14:39.309376 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:14:39.309380 | orchestrator | 2026-02-09 04:14:39.309383 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-09 04:14:39.309387 | orchestrator | Monday 09 February 2026 04:14:05 +0000 (0:00:27.133) 0:02:03.810 ******* 2026-02-09 04:14:39.309391 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:14:39.309395 | orchestrator | 2026-02-09 04:14:39.309398 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-09 04:14:39.309402 | orchestrator | Monday 09 February 2026 04:14:07 +0000 (0:00:01.996) 0:02:05.807 ******* 2026-02-09 04:14:39.309406 | orchestrator | 2026-02-09 04:14:39.309410 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-09 04:14:39.309414 | orchestrator | Monday 09 February 2026 04:14:07 +0000 (0:00:00.069) 0:02:05.877 ******* 2026-02-09 04:14:39.309417 | orchestrator | 2026-02-09 04:14:39.309421 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-09 04:14:39.309425 | orchestrator | Monday 09 February 2026 04:14:07 +0000 (0:00:00.071) 0:02:05.949 ******* 2026-02-09 04:14:39.309428 | orchestrator | 2026-02-09 04:14:39.309432 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-09 04:14:39.309436 | orchestrator | Monday 09 February 2026 04:14:07 +0000 (0:00:00.071) 0:02:06.021 ******* 2026-02-09 04:14:39.309440 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:14:39.309443 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:14:39.309447 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:14:39.309451 | orchestrator | 2026-02-09 04:14:39.309455 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:14:39.309459 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-09 04:14:39.309465 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-09 04:14:39.309469 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-09 04:14:39.309472 | orchestrator | 2026-02-09 04:14:39.309476 | orchestrator | 2026-02-09 04:14:39.309480 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:14:39.309484 | orchestrator | Monday 09 February 2026 04:14:39 +0000 (0:00:31.318) 0:02:37.339 ******* 2026-02-09 04:14:39.309487 | orchestrator | =============================================================================== 2026-02-09 04:14:39.309491 | orchestrator | glance : Restart glance-api container ---------------------------------- 31.32s 2026-02-09 04:14:39.309495 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.13s 2026-02-09 04:14:39.309499 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.94s 2026-02-09 04:14:39.309505 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.92s 2026-02-09 04:14:39.721129 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.46s 2026-02-09 04:14:39.721222 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.19s 2026-02-09 04:14:39.721232 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.11s 2026-02-09 04:14:39.721238 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.08s 2026-02-09 04:14:39.721244 | orchestrator | glance : Copying over config.json files for services -------------------- 4.04s 2026-02-09 04:14:39.721250 | orchestrator | glance : Check glance containers ---------------------------------------- 3.93s 2026-02-09 04:14:39.721255 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.77s 2026-02-09 04:14:39.721262 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.76s 2026-02-09 04:14:39.721306 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.73s 2026-02-09 04:14:39.721314 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.73s 2026-02-09 04:14:39.721320 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.62s 2026-02-09 04:14:39.721325 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.59s 2026-02-09 04:14:39.721331 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.58s 2026-02-09 04:14:39.721337 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.43s 2026-02-09 04:14:39.721343 | orchestrator | 
service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.41s 2026-02-09 04:14:39.721350 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.40s 2026-02-09 04:14:42.205047 | orchestrator | 2026-02-09 04:14:42 | INFO  | Task 3d55582b-0af7-409e-ae31-5866ec6992e3 (cinder) was prepared for execution. 2026-02-09 04:14:42.205123 | orchestrator | 2026-02-09 04:14:42 | INFO  | It takes a moment until task 3d55582b-0af7-409e-ae31-5866ec6992e3 (cinder) has been started and output is visible here. 2026-02-09 04:15:15.146243 | orchestrator | 2026-02-09 04:15:15.146397 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:15:15.146414 | orchestrator | 2026-02-09 04:15:15.146426 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:15:15.146438 | orchestrator | Monday 09 February 2026 04:14:46 +0000 (0:00:00.288) 0:00:00.288 ******* 2026-02-09 04:15:15.146449 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:15:15.146461 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:15:15.146472 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:15:15.146483 | orchestrator | 2026-02-09 04:15:15.146494 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:15:15.146505 | orchestrator | Monday 09 February 2026 04:14:46 +0000 (0:00:00.337) 0:00:00.626 ******* 2026-02-09 04:15:15.146515 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-09 04:15:15.146527 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-09 04:15:15.146537 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-09 04:15:15.146548 | orchestrator | 2026-02-09 04:15:15.146559 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-09 04:15:15.146570 | orchestrator | 2026-02-09 
04:15:15.146581 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-09 04:15:15.146591 | orchestrator | Monday 09 February 2026 04:14:47 +0000 (0:00:00.478) 0:00:01.104 ******* 2026-02-09 04:15:15.146603 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:15:15.146614 | orchestrator | 2026-02-09 04:15:15.146625 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-09 04:15:15.146636 | orchestrator | Monday 09 February 2026 04:14:48 +0000 (0:00:00.556) 0:00:01.660 ******* 2026-02-09 04:15:15.146647 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-09 04:15:15.146657 | orchestrator | 2026-02-09 04:15:15.146668 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-09 04:15:15.146680 | orchestrator | Monday 09 February 2026 04:14:50 +0000 (0:00:02.920) 0:00:04.581 ******* 2026-02-09 04:15:15.146692 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-09 04:15:15.146703 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-09 04:15:15.146714 | orchestrator | 2026-02-09 04:15:15.146725 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-09 04:15:15.146735 | orchestrator | Monday 09 February 2026 04:14:56 +0000 (0:00:05.711) 0:00:10.292 ******* 2026-02-09 04:15:15.146746 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-09 04:15:15.146781 | orchestrator | 2026-02-09 04:15:15.146793 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-09 04:15:15.146803 | orchestrator | Monday 09 February 2026 04:14:59 +0000 (0:00:02.918) 
0:00:13.211 ******* 2026-02-09 04:15:15.146844 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-09 04:15:15.146857 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-09 04:15:15.146868 | orchestrator | 2026-02-09 04:15:15.146879 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-09 04:15:15.146889 | orchestrator | Monday 09 February 2026 04:15:03 +0000 (0:00:03.838) 0:00:17.049 ******* 2026-02-09 04:15:15.146900 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-09 04:15:15.146911 | orchestrator | 2026-02-09 04:15:15.146926 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-09 04:15:15.146949 | orchestrator | Monday 09 February 2026 04:15:06 +0000 (0:00:02.964) 0:00:20.013 ******* 2026-02-09 04:15:15.146976 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-09 04:15:15.146994 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-09 04:15:15.147010 | orchestrator | 2026-02-09 04:15:15.147027 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-09 04:15:15.147044 | orchestrator | Monday 09 February 2026 04:15:13 +0000 (0:00:06.810) 0:00:26.823 ******* 2026-02-09 04:15:15.147088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:15.147138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:15.147159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:15.147191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:15.147211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:15.147237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:15.147257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:15.147287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:21.348738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:21.348947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:21.348967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:21.348994 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:21.349007 | orchestrator | 2026-02-09 04:15:21.349020 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-09 04:15:21.349032 | orchestrator | Monday 09 February 2026 04:15:15 +0000 (0:00:02.058) 0:00:28.882 ******* 2026-02-09 04:15:21.349044 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:15:21.349056 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:15:21.349067 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:15:21.349077 | orchestrator | 2026-02-09 04:15:21.349089 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-09 04:15:21.349099 | orchestrator | Monday 09 February 2026 04:15:15 +0000 (0:00:00.536) 0:00:29.418 ******* 2026-02-09 04:15:21.349111 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:15:21.349122 | orchestrator | 2026-02-09 04:15:21.349132 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-09 04:15:21.349143 | orchestrator | Monday 09 February 2026 04:15:16 +0000 (0:00:00.600) 0:00:30.018 ******* 2026-02-09 04:15:21.349155 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-09 04:15:21.349166 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-09 04:15:21.349177 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-09 04:15:21.349188 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-09 04:15:21.349199 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-09 04:15:21.349209 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-09 04:15:21.349228 | orchestrator | 2026-02-09 04:15:21.349239 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-09 04:15:21.349250 | orchestrator | Monday 09 February 2026 04:15:18 +0000 (0:00:01.792) 0:00:31.810 ******* 2026-02-09 04:15:21.349281 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-09 04:15:21.349297 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-09 04:15:21.349318 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-09 04:15:21.349332 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-09 04:15:21.349369 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-09 04:15:32.385924 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-09 04:15:32.386007 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-09 04:15:32.386055 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-09 04:15:32.386074 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-09 04:15:32.386080 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-09 04:15:32.386181 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-09 
04:15:32.386195 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-09 04:15:32.386204 | orchestrator | 2026-02-09 04:15:32.386212 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-09 04:15:32.386220 | orchestrator | Monday 09 February 2026 04:15:21 +0000 (0:00:03.531) 0:00:35.341 ******* 2026-02-09 04:15:32.386227 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-09 04:15:32.386235 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-09 04:15:32.386242 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-09 04:15:32.386248 | orchestrator | 2026-02-09 04:15:32.386255 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-09 04:15:32.386262 | orchestrator | Monday 09 February 2026 04:15:23 +0000 (0:00:01.590) 0:00:36.932 ******* 2026-02-09 04:15:32.386272 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-09 04:15:32.386279 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-09 04:15:32.386285 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-09 04:15:32.386292 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-09 04:15:32.386299 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-09 04:15:32.386306 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-09 04:15:32.386313 | orchestrator | 2026-02-09 04:15:32.386326 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-09 04:15:32.386333 | orchestrator | Monday 09 February 2026 04:15:26 +0000 (0:00:02.783) 0:00:39.716 ******* 2026-02-09 04:15:32.386341 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-09 04:15:32.386349 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-09 04:15:32.386357 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-09 04:15:32.386362 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-09 04:15:32.386372 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-09 04:15:32.386377 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-09 04:15:32.386381 | orchestrator | 2026-02-09 04:15:32.386385 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-09 04:15:32.386390 | orchestrator | Monday 09 February 2026 04:15:27 +0000 (0:00:01.069) 0:00:40.786 ******* 2026-02-09 04:15:32.386394 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:15:32.386399 | orchestrator | 2026-02-09 04:15:32.386403 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-09 04:15:32.386408 | orchestrator | Monday 09 February 2026 04:15:27 +0000 (0:00:00.150) 0:00:40.937 ******* 2026-02-09 04:15:32.386412 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:15:32.386416 | orchestrator | 
skipping: [testbed-node-1] 2026-02-09 04:15:32.386420 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:15:32.386425 | orchestrator | 2026-02-09 04:15:32.386429 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-09 04:15:32.386433 | orchestrator | Monday 09 February 2026 04:15:27 +0000 (0:00:00.548) 0:00:41.485 ******* 2026-02-09 04:15:32.386438 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:15:32.386443 | orchestrator | 2026-02-09 04:15:32.386448 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-09 04:15:32.386453 | orchestrator | Monday 09 February 2026 04:15:28 +0000 (0:00:00.597) 0:00:42.083 ******* 2026-02-09 04:15:32.386466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:33.398171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:33.398273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:33.398327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:33.398340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:33.398349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:33.398379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:33.398390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:33.398400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 
04:15:33.398430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:33.398442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:33.398451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:33.398461 | orchestrator | 2026-02-09 04:15:33.398471 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-09 04:15:33.398482 | orchestrator | Monday 09 February 2026 04:15:32 +0000 (0:00:04.051) 0:00:46.135 ******* 2026-02-09 04:15:33.398500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-09 04:15:33.531555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:15:33.531700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 04:15:33.531721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 04:15:33.531733 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:15:33.531746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-09 04:15:33.531757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:15:33.531790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-09 04:15:33.531865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 04:15:33.531873 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:15:33.531885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-09 04:15:33.531893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:15:33.531899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 04:15:33.531906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 04:15:33.531912 | orchestrator | skipping: 
[testbed-node-2] 2026-02-09 04:15:33.531923 | orchestrator | 2026-02-09 04:15:33.531930 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-09 04:15:33.531943 | orchestrator | Monday 09 February 2026 04:15:33 +0000 (0:00:01.046) 0:00:47.181 ******* 2026-02-09 04:15:34.120777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-09 04:15:34.120923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:15:34.120945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 04:15:34.120961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 04:15:34.120976 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:15:34.120990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-09 04:15:34.121040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:15:34.121055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 04:15:34.121063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 04:15:34.121071 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:15:34.121078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-09 04:15:34.121086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:15:34.121099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 04:15:38.785346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 04:15:38.785458 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:15:38.785475 | orchestrator | 2026-02-09 04:15:38.785489 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-09 04:15:38.785518 | orchestrator | Monday 09 February 2026 04:15:34 +0000 (0:00:00.913) 0:00:48.094 ******* 2026-02-09 04:15:38.785532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:38.785545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 
04:15:38.785558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:38.785609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:38.785626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:38.785642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:38.785654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:38.785667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:38.785677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:38.785702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:52.535673 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:52.535768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:52.535825 | orchestrator | 2026-02-09 04:15:52.535833 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-09 04:15:52.535839 | orchestrator | Monday 09 February 2026 04:15:38 +0000 (0:00:04.476) 0:00:52.570 ******* 2026-02-09 04:15:52.535844 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-09 04:15:52.535850 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-09 04:15:52.535855 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-09 04:15:52.535859 | orchestrator | 2026-02-09 04:15:52.535864 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-09 04:15:52.535868 | orchestrator | Monday 09 February 2026 04:15:41 +0000 (0:00:02.099) 0:00:54.671 ******* 2026-02-09 04:15:52.535874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:52.535894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:52.535910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:52.535920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:52.535925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:52.535930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:52.535939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:52.535945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:52.535954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:55.056202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:55.056281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:55.056291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:55.056318 | orchestrator | 2026-02-09 04:15:55.056327 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-09 04:15:55.056336 | orchestrator | Monday 09 February 2026 04:15:52 +0000 (0:00:11.618) 0:01:06.289 ******* 2026-02-09 04:15:55.056341 | orchestrator | changed: [testbed-node-0] 
2026-02-09 04:15:55.056349 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:15:55.056355 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:15:55.056361 | orchestrator | 2026-02-09 04:15:55.056367 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-09 04:15:55.056374 | orchestrator | Monday 09 February 2026 04:15:54 +0000 (0:00:01.560) 0:01:07.850 ******* 2026-02-09 04:15:55.056379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-09 04:15:55.056386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-09 04:15:55.056406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 04:15:55.056413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 04:15:55.056421 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:15:55.056425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-09 04:15:55.056429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:15:55.056433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 04:15:55.056445 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 04:15:58.690327 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:15:58.690423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-09 04:15:58.690456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:15:58.690466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 04:15:58.690475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 04:15:58.690483 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:15:58.690491 | orchestrator | 2026-02-09 
04:15:58.690500 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-09 04:15:58.690508 | orchestrator | Monday 09 February 2026 04:15:55 +0000 (0:00:00.948) 0:01:08.799 ******* 2026-02-09 04:15:58.690516 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:15:58.690523 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:15:58.690530 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:15:58.690537 | orchestrator | 2026-02-09 04:15:58.690557 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-09 04:15:58.690564 | orchestrator | Monday 09 February 2026 04:15:55 +0000 (0:00:00.648) 0:01:09.448 ******* 2026-02-09 04:15:58.690600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:58.690610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:58.690624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-09 04:15:58.690632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:58.690640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:58.690647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:15:58.690665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:17:37.630127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:17:37.630213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-09 04:17:37.630222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:17:37.630228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-09 04:17:37.630252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-09 04:17:37.630272 | orchestrator | 2026-02-09 04:17:37.630279 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-09 04:17:37.630286 | orchestrator | Monday 09 February 2026 04:15:58 +0000 (0:00:02.992) 0:01:12.440 ******* 2026-02-09 04:17:37.630291 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:17:37.630297 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:17:37.630302 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:17:37.630307 | orchestrator | 2026-02-09 04:17:37.630313 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-09 04:17:37.630318 | orchestrator | Monday 09 February 2026 04:15:59 +0000 (0:00:00.314) 0:01:12.755 ******* 2026-02-09 04:17:37.630323 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:17:37.630328 | orchestrator | 2026-02-09 04:17:37.630345 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-09 04:17:37.630351 | orchestrator | Monday 09 February 2026 04:16:01 +0000 (0:00:01.969) 0:01:14.724 ******* 2026-02-09 04:17:37.630356 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:17:37.630361 | orchestrator | 2026-02-09 04:17:37.630366 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-09 04:17:37.630371 | orchestrator | Monday 09 February 2026 04:16:03 +0000 (0:00:02.075) 0:01:16.800 ******* 2026-02-09 04:17:37.630377 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:17:37.630382 | orchestrator | 2026-02-09 04:17:37.630387 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-09 04:17:37.630392 | orchestrator | Monday 09 February 2026 04:16:20 +0000 (0:00:17.304) 0:01:34.104 ******* 2026-02-09 04:17:37.630397 | orchestrator | 2026-02-09 04:17:37.630402 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-09 04:17:37.630407 | orchestrator | Monday 09 February 2026 04:16:20 +0000 (0:00:00.080) 0:01:34.184 ******* 2026-02-09 04:17:37.630412 | orchestrator | 2026-02-09 04:17:37.630417 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-09 04:17:37.630422 | orchestrator | Monday 09 February 2026 04:16:20 +0000 (0:00:00.075) 0:01:34.260 ******* 2026-02-09 04:17:37.630427 | orchestrator | 2026-02-09 04:17:37.630432 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-09 04:17:37.630437 | orchestrator | Monday 09 February 2026 04:16:20 +0000 (0:00:00.078) 0:01:34.338 ******* 2026-02-09 04:17:37.630446 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:17:37.630454 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:17:37.630471 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:17:37.630478 | orchestrator | 2026-02-09 04:17:37.630486 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-09 04:17:37.630494 | orchestrator | Monday 09 February 2026 04:16:50 +0000 (0:00:30.264) 0:02:04.603 ******* 2026-02-09 04:17:37.630504 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:17:37.630512 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:17:37.630521 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:17:37.630529 | orchestrator | 2026-02-09 04:17:37.630537 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-09 04:17:37.630547 | orchestrator | Monday 09 February 2026 04:17:01 +0000 (0:00:10.219) 0:02:14.822 ******* 2026-02-09 04:17:37.630554 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:17:37.630559 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:17:37.630564 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:17:37.630569 | orchestrator | 2026-02-09 
04:17:37.630574 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-09 04:17:37.630579 | orchestrator | Monday 09 February 2026 04:17:26 +0000 (0:00:25.534) 0:02:40.357 ******* 2026-02-09 04:17:37.630584 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:17:37.630589 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:17:37.630594 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:17:37.630599 | orchestrator | 2026-02-09 04:17:37.630604 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-09 04:17:37.630616 | orchestrator | Monday 09 February 2026 04:17:37 +0000 (0:00:10.619) 0:02:50.977 ******* 2026-02-09 04:17:37.630621 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:17:37.630626 | orchestrator | 2026-02-09 04:17:37.630632 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:17:37.630639 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-09 04:17:37.630647 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-09 04:17:37.630653 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-09 04:17:37.630659 | orchestrator | 2026-02-09 04:17:37.630665 | orchestrator | 2026-02-09 04:17:37.630671 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:17:37.630699 | orchestrator | Monday 09 February 2026 04:17:37 +0000 (0:00:00.287) 0:02:51.264 ******* 2026-02-09 04:17:37.630706 | orchestrator | =============================================================================== 2026-02-09 04:17:37.630712 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 30.26s 2026-02-09 04:17:37.630718 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 25.53s 2026-02-09 04:17:37.630724 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.30s 2026-02-09 04:17:37.630730 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.62s 2026-02-09 04:17:37.630736 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.62s 2026-02-09 04:17:37.630747 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.22s 2026-02-09 04:17:37.630753 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.81s 2026-02-09 04:17:37.630760 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.71s 2026-02-09 04:17:37.630769 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.48s 2026-02-09 04:17:37.630779 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.05s 2026-02-09 04:17:37.630791 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.84s 2026-02-09 04:17:37.630802 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.53s 2026-02-09 04:17:37.630810 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.99s 2026-02-09 04:17:37.630818 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 2.96s 2026-02-09 04:17:37.630834 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 2.92s 2026-02-09 04:17:38.037237 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.92s 2026-02-09 04:17:38.037343 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.78s 2026-02-09 04:17:38.037358 | orchestrator | cinder : Copying over 
cinder-wsgi.conf ---------------------------------- 2.10s 2026-02-09 04:17:38.037370 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.08s 2026-02-09 04:17:38.037381 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.06s 2026-02-09 04:17:40.532123 | orchestrator | 2026-02-09 04:17:40 | INFO  | Task 1535a887-6b85-488e-b929-a64f4750bba2 (barbican) was prepared for execution. 2026-02-09 04:17:40.532196 | orchestrator | 2026-02-09 04:17:40 | INFO  | It takes a moment until task 1535a887-6b85-488e-b929-a64f4750bba2 (barbican) has been started and output is visible here. 2026-02-09 04:18:22.196760 | orchestrator | 2026-02-09 04:18:22.196858 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:18:22.196872 | orchestrator | 2026-02-09 04:18:22.196879 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:18:22.196900 | orchestrator | Monday 09 February 2026 04:17:45 +0000 (0:00:00.282) 0:00:00.282 ******* 2026-02-09 04:18:22.196905 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:18:22.196910 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:18:22.196914 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:18:22.196917 | orchestrator | 2026-02-09 04:18:22.196922 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:18:22.196926 | orchestrator | Monday 09 February 2026 04:17:45 +0000 (0:00:00.317) 0:00:00.600 ******* 2026-02-09 04:18:22.196930 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-09 04:18:22.196934 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-09 04:18:22.196938 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-09 04:18:22.196942 | orchestrator | 2026-02-09 04:18:22.196946 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-09 04:18:22.196949 | orchestrator | 2026-02-09 04:18:22.196954 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-09 04:18:22.196957 | orchestrator | Monday 09 February 2026 04:17:45 +0000 (0:00:00.439) 0:00:01.039 ******* 2026-02-09 04:18:22.196962 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:18:22.196966 | orchestrator | 2026-02-09 04:18:22.196970 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-09 04:18:22.196974 | orchestrator | Monday 09 February 2026 04:17:46 +0000 (0:00:00.601) 0:00:01.641 ******* 2026-02-09 04:18:22.196978 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-09 04:18:22.196982 | orchestrator | 2026-02-09 04:18:22.196985 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-09 04:18:22.196989 | orchestrator | Monday 09 February 2026 04:17:49 +0000 (0:00:03.324) 0:00:04.966 ******* 2026-02-09 04:18:22.196993 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-09 04:18:22.196997 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-09 04:18:22.197000 | orchestrator | 2026-02-09 04:18:22.197004 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-09 04:18:22.197008 | orchestrator | Monday 09 February 2026 04:17:55 +0000 (0:00:06.013) 0:00:10.979 ******* 2026-02-09 04:18:22.197012 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-09 04:18:22.197016 | orchestrator | 2026-02-09 04:18:22.197019 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-09 
04:18:22.197023 | orchestrator | Monday 09 February 2026 04:17:58 +0000 (0:00:03.030) 0:00:14.011 ******* 2026-02-09 04:18:22.197027 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-09 04:18:22.197031 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-09 04:18:22.197035 | orchestrator | 2026-02-09 04:18:22.197039 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-09 04:18:22.197042 | orchestrator | Monday 09 February 2026 04:18:02 +0000 (0:00:03.797) 0:00:17.808 ******* 2026-02-09 04:18:22.197046 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-09 04:18:22.197050 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-09 04:18:22.197054 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-09 04:18:22.197057 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-09 04:18:22.197061 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-09 04:18:22.197065 | orchestrator | 2026-02-09 04:18:22.197079 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-09 04:18:22.197083 | orchestrator | Monday 09 February 2026 04:18:17 +0000 (0:00:14.485) 0:00:32.294 ******* 2026-02-09 04:18:22.197086 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-09 04:18:22.197094 | orchestrator | 2026-02-09 04:18:22.197098 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-09 04:18:22.197101 | orchestrator | Monday 09 February 2026 04:18:20 +0000 (0:00:03.558) 0:00:35.852 ******* 2026-02-09 04:18:22.197107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:22.197124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:22.197128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:22.197134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:22.197142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:22.197149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:22.197158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:27.966706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:27.966845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:27.966863 | orchestrator | 2026-02-09 04:18:27.966878 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-09 04:18:27.966890 | orchestrator | Monday 09 February 2026 04:18:22 +0000 (0:00:01.567) 0:00:37.420 ******* 2026-02-09 04:18:27.966902 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-09 04:18:27.966913 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-09 04:18:27.966924 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-09 04:18:27.966934 | orchestrator | 2026-02-09 04:18:27.966945 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-09 04:18:27.966956 | orchestrator | Monday 09 February 2026 04:18:23 +0000 (0:00:01.139) 0:00:38.559 ******* 2026-02-09 04:18:27.966967 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:18:27.966978 | orchestrator | 2026-02-09 04:18:27.966989 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-09 04:18:27.967000 | orchestrator | Monday 09 February 2026 04:18:23 +0000 (0:00:00.334) 0:00:38.894 ******* 2026-02-09 04:18:27.967035 | orchestrator | 
skipping: [testbed-node-0] 2026-02-09 04:18:27.967047 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:18:27.967057 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:18:27.967068 | orchestrator | 2026-02-09 04:18:27.967079 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-09 04:18:27.967090 | orchestrator | Monday 09 February 2026 04:18:23 +0000 (0:00:00.323) 0:00:39.217 ******* 2026-02-09 04:18:27.967101 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:18:27.967112 | orchestrator | 2026-02-09 04:18:27.967137 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-09 04:18:27.967148 | orchestrator | Monday 09 February 2026 04:18:24 +0000 (0:00:00.581) 0:00:39.798 ******* 2026-02-09 04:18:27.967163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:27.967199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:27.967214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:27.967228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:27.967257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:27.967270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:27.967284 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:27.967306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:29.394107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:29.394224 | orchestrator | 2026-02-09 04:18:29.394240 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-09 04:18:29.394251 | orchestrator | Monday 09 February 2026 04:18:27 +0000 (0:00:03.390) 0:00:43.189 ******* 2026-02-09 04:18:29.394263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 04:18:29.394364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:18:29.394387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:18:29.394402 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:18:29.394419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 04:18:29.394457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:18:29.394474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:18:29.394501 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:18:29.394524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 04:18:29.394540 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:18:29.394555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:18:29.394569 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:18:29.394583 | orchestrator | 2026-02-09 04:18:29.394600 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-09 04:18:29.394615 | orchestrator | Monday 09 February 2026 04:18:28 +0000 (0:00:00.625) 0:00:43.815 ******* 2026-02-09 04:18:29.394667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 04:18:32.801555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:18:32.801720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 
04:18:32.801737 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:18:32.801765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 04:18:32.801777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:18:32.801786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:18:32.801795 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:18:32.801819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 04:18:32.801852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:18:32.801892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:18:32.801903 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:18:32.801912 | orchestrator | 2026-02-09 04:18:32.801922 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-09 04:18:32.801931 | orchestrator | Monday 09 February 2026 04:18:29 +0000 (0:00:00.810) 0:00:44.625 ******* 2026-02-09 04:18:32.801941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:32.801951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:32.801975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:42.494233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:42.494384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:42.494402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:42.494412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:42.494424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:42.494455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:42.494465 | orchestrator | 2026-02-09 04:18:42.494476 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-09 04:18:42.494487 | orchestrator | Monday 09 February 2026 04:18:32 +0000 (0:00:03.401) 0:00:48.026 ******* 2026-02-09 04:18:42.494496 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:18:42.494507 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:18:42.494524 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:18:42.494533 | orchestrator | 2026-02-09 04:18:42.494559 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-09 04:18:42.494569 | orchestrator | Monday 09 February 2026 04:18:34 +0000 (0:00:01.582) 0:00:49.608 ******* 2026-02-09 04:18:42.494578 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 04:18:42.494587 | orchestrator | 2026-02-09 04:18:42.494596 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-09 04:18:42.494605 | orchestrator | Monday 09 February 2026 04:18:35 +0000 (0:00:00.981) 0:00:50.590 ******* 2026-02-09 04:18:42.494614 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:18:42.494670 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:18:42.494680 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:18:42.494690 | orchestrator | 2026-02-09 04:18:42.494699 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-09 04:18:42.494708 | orchestrator | Monday 09 February 2026 04:18:35 +0000 (0:00:00.581) 0:00:51.172 ******* 2026-02-09 04:18:42.494811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:42.494834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:42.494861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:42.494886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:43.402108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:43.402226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:43.402244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:43.402259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:43.402292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:43.402306 | orchestrator | 2026-02-09 04:18:43.402319 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-09 04:18:43.402332 | orchestrator | Monday 09 February 2026 04:18:42 +0000 (0:00:06.556) 0:00:57.728 ******* 2026-02-09 04:18:43.402363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 04:18:43.402376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:18:43.402395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:18:43.402407 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:18:43.402420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 04:18:43.402443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:18:43.402455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:18:43.402466 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:18:43.402486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-09 04:18:45.712131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:18:45.712255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:18:45.712311 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:18:45.712326 | orchestrator | 2026-02-09 04:18:45.712336 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-09 04:18:45.712347 | orchestrator | Monday 09 February 2026 04:18:43 +0000 (0:00:00.900) 0:00:58.629 ******* 2026-02-09 04:18:45.712357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:45.712368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:45.712396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-09 04:18:45.712414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:45.712432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:45.712441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:45.712450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:45.712459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:45.712471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:18:45.712486 | orchestrator | 2026-02-09 04:18:45.712501 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-09 04:18:45.712524 | orchestrator | Monday 09 February 2026 04:18:45 +0000 (0:00:02.312) 0:01:00.942 ******* 2026-02-09 04:19:28.247437 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:19:28.247531 | orchestrator | skipping: [testbed-node-1] 2026-02-09 
04:19:28.247541 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:19:28.247549 | orchestrator | 2026-02-09 04:19:28.247558 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-09 04:19:28.247566 | orchestrator | Monday 09 February 2026 04:18:46 +0000 (0:00:00.429) 0:01:01.371 ******* 2026-02-09 04:19:28.247678 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:19:28.247688 | orchestrator | 2026-02-09 04:19:28.247695 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-09 04:19:28.247702 | orchestrator | Monday 09 February 2026 04:18:48 +0000 (0:00:02.013) 0:01:03.385 ******* 2026-02-09 04:19:28.247709 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:19:28.247717 | orchestrator | 2026-02-09 04:19:28.247724 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-09 04:19:28.247731 | orchestrator | Monday 09 February 2026 04:18:50 +0000 (0:00:02.129) 0:01:05.514 ******* 2026-02-09 04:19:28.247739 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:19:28.247746 | orchestrator | 2026-02-09 04:19:28.247753 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-09 04:19:28.247760 | orchestrator | Monday 09 February 2026 04:19:02 +0000 (0:00:11.807) 0:01:17.322 ******* 2026-02-09 04:19:28.247776 | orchestrator | 2026-02-09 04:19:28.247791 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-09 04:19:28.247798 | orchestrator | Monday 09 February 2026 04:19:02 +0000 (0:00:00.071) 0:01:17.393 ******* 2026-02-09 04:19:28.247806 | orchestrator | 2026-02-09 04:19:28.247813 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-09 04:19:28.247820 | orchestrator | Monday 09 February 2026 04:19:02 +0000 (0:00:00.069) 0:01:17.462 ******* 2026-02-09 
04:19:28.247826 | orchestrator | 2026-02-09 04:19:28.247833 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-09 04:19:28.247840 | orchestrator | Monday 09 February 2026 04:19:02 +0000 (0:00:00.076) 0:01:17.539 ******* 2026-02-09 04:19:28.247847 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:19:28.247854 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:19:28.247860 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:19:28.247867 | orchestrator | 2026-02-09 04:19:28.247874 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-09 04:19:28.247880 | orchestrator | Monday 09 February 2026 04:19:13 +0000 (0:00:11.079) 0:01:28.619 ******* 2026-02-09 04:19:28.247887 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:19:28.247894 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:19:28.247901 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:19:28.247908 | orchestrator | 2026-02-09 04:19:28.247915 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-09 04:19:28.247922 | orchestrator | Monday 09 February 2026 04:19:17 +0000 (0:00:04.323) 0:01:32.942 ******* 2026-02-09 04:19:28.247929 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:19:28.247936 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:19:28.247942 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:19:28.247949 | orchestrator | 2026-02-09 04:19:28.247956 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:19:28.247963 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-09 04:19:28.247971 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 04:19:28.247979 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 04:19:28.247986 | orchestrator | 2026-02-09 04:19:28.247993 | orchestrator | 2026-02-09 04:19:28.248000 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:19:28.248007 | orchestrator | Monday 09 February 2026 04:19:27 +0000 (0:00:10.133) 0:01:43.076 ******* 2026-02-09 04:19:28.248014 | orchestrator | =============================================================================== 2026-02-09 04:19:28.248021 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.49s 2026-02-09 04:19:28.248028 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.81s 2026-02-09 04:19:28.248041 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.08s 2026-02-09 04:19:28.248048 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.13s 2026-02-09 04:19:28.248054 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.56s 2026-02-09 04:19:28.248061 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.01s 2026-02-09 04:19:28.248069 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.32s 2026-02-09 04:19:28.248076 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.80s 2026-02-09 04:19:28.248083 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.56s 2026-02-09 04:19:28.248090 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.40s 2026-02-09 04:19:28.248097 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.39s 2026-02-09 04:19:28.248104 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.32s 
2026-02-09 04:19:28.248111 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.03s 2026-02-09 04:19:28.248118 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.31s 2026-02-09 04:19:28.248125 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.13s 2026-02-09 04:19:28.248146 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.01s 2026-02-09 04:19:28.248155 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.58s 2026-02-09 04:19:28.248162 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.57s 2026-02-09 04:19:28.248169 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.14s 2026-02-09 04:19:28.248180 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.98s 2026-02-09 04:19:30.817475 | orchestrator | 2026-02-09 04:19:30 | INFO  | Task 9973c311-8ae5-4fa7-8987-24be55ae75db (designate) was prepared for execution. 2026-02-09 04:19:30.817741 | orchestrator | 2026-02-09 04:19:30 | INFO  | It takes a moment until task 9973c311-8ae5-4fa7-8987-24be55ae75db (designate) has been started and output is visible here. 
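The item dicts repeated throughout the barbican tasks above carry a `healthcheck` mapping with string-valued seconds (`'interval': '30'`, `'retries': '3'`, a `CMD-SHELL` test, and so on). As a minimal sketch of what such a mapping amounts to, the helper below normalizes one of these dicts into the shape Docker's Engine API uses for container health checks; the field names are taken from the log, while the nanosecond duration conversion is an assumption about the Docker API, not something this log confirms about kolla-ansible's internals.

```python
# Hedged sketch: normalize a kolla-style healthcheck mapping (as printed in
# the log items above) into a Docker-Engine-API-shaped dict. The nanosecond
# conversion reflects Docker's API convention and is an assumption here.

def to_docker_healthcheck(hc: dict) -> dict:
    """Convert string-second kolla healthcheck values to Docker API form."""
    ns_per_sec = 1_000_000_000  # Docker API expresses durations in nanoseconds
    return {
        "Test": list(hc["test"]),                      # e.g. ["CMD-SHELL", "..."]
        "Interval": int(hc["interval"]) * ns_per_sec,  # seconds -> ns
        "Timeout": int(hc["timeout"]) * ns_per_sec,
        "StartPeriod": int(hc["start_period"]) * ns_per_sec,
        "Retries": int(hc["retries"]),
    }

# Example taken verbatim from the barbican-worker items in the log:
example = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
    "timeout": "30",
}
normalized = to_docker_healthcheck(example)
```

With these values, `normalized["Interval"]` comes out as 30 seconds in nanoseconds and `normalized["Retries"]` as the integer 3, which matches the `30`/`3` strings printed for every barbican container in the task output above.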
2026-02-09 04:20:01.290652 | orchestrator | 2026-02-09 04:20:01.290740 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:20:01.290750 | orchestrator | 2026-02-09 04:20:01.290756 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:20:01.290762 | orchestrator | Monday 09 February 2026 04:19:35 +0000 (0:00:00.264) 0:00:00.264 ******* 2026-02-09 04:20:01.290768 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:20:01.290774 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:20:01.290780 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:20:01.290785 | orchestrator | 2026-02-09 04:20:01.290791 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:20:01.290796 | orchestrator | Monday 09 February 2026 04:19:35 +0000 (0:00:00.344) 0:00:00.609 ******* 2026-02-09 04:20:01.290802 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-09 04:20:01.290808 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-09 04:20:01.290813 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-09 04:20:01.290818 | orchestrator | 2026-02-09 04:20:01.290823 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-09 04:20:01.290829 | orchestrator | 2026-02-09 04:20:01.290834 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-09 04:20:01.290839 | orchestrator | Monday 09 February 2026 04:19:36 +0000 (0:00:00.526) 0:00:01.136 ******* 2026-02-09 04:20:01.290845 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:20:01.290851 | orchestrator | 2026-02-09 04:20:01.290856 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-02-09 04:20:01.290878 | orchestrator | Monday 09 February 2026 04:19:36 +0000 (0:00:00.608) 0:00:01.744 ******* 2026-02-09 04:20:01.290886 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-02-09 04:20:01.290895 | orchestrator | 2026-02-09 04:20:01.290903 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-02-09 04:20:01.290911 | orchestrator | Monday 09 February 2026 04:19:39 +0000 (0:00:03.276) 0:00:05.020 ******* 2026-02-09 04:20:01.290918 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-02-09 04:20:01.290927 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-02-09 04:20:01.290935 | orchestrator | 2026-02-09 04:20:01.290944 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-02-09 04:20:01.290953 | orchestrator | Monday 09 February 2026 04:19:45 +0000 (0:00:05.942) 0:00:10.962 ******* 2026-02-09 04:20:01.290961 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-09 04:20:01.290969 | orchestrator | 2026-02-09 04:20:01.290978 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-02-09 04:20:01.290983 | orchestrator | Monday 09 February 2026 04:19:48 +0000 (0:00:02.989) 0:00:13.952 ******* 2026-02-09 04:20:01.290989 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-09 04:20:01.290994 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-02-09 04:20:01.290999 | orchestrator | 2026-02-09 04:20:01.291004 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-02-09 04:20:01.291009 | orchestrator | Monday 09 February 2026 04:19:52 +0000 (0:00:03.844) 0:00:17.797 ******* 2026-02-09 04:20:01.291014 | orchestrator | ok: [testbed-node-0] => 
(item=admin) 2026-02-09 04:20:01.291020 | orchestrator | 2026-02-09 04:20:01.291025 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-09 04:20:01.291030 | orchestrator | Monday 09 February 2026 04:19:55 +0000 (0:00:03.038) 0:00:20.835 ******* 2026-02-09 04:20:01.291036 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-09 04:20:01.291041 | orchestrator | 2026-02-09 04:20:01.291046 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-09 04:20:01.291051 | orchestrator | Monday 09 February 2026 04:19:59 +0000 (0:00:03.471) 0:00:24.306 ******* 2026-02-09 04:20:01.291060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:01.291100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:01.291118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:01.291129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:01.291140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:01.291149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:01.291163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:01.291177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 
04:20:07.590847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:07.590870 | orchestrator | 2026-02-09 04:20:07.590884 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-09 04:20:07.590897 | orchestrator | Monday 09 February 2026 04:20:02 +0000 (0:00:02.832) 0:00:27.139 ******* 2026-02-09 04:20:07.590908 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:20:07.590920 | orchestrator | 2026-02-09 04:20:07.590931 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-09 04:20:07.590942 | orchestrator | Monday 09 February 2026 04:20:02 +0000 (0:00:00.138) 0:00:27.278 ******* 2026-02-09 04:20:07.590953 | orchestrator | skipping: [testbed-node-0] 2026-02-09 
04:20:07.590964 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:20:07.590975 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:20:07.590987 | orchestrator | 2026-02-09 04:20:07.590998 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-09 04:20:07.591009 | orchestrator | Monday 09 February 2026 04:20:02 +0000 (0:00:00.578) 0:00:27.857 ******* 2026-02-09 04:20:07.591021 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:20:07.591032 | orchestrator | 2026-02-09 04:20:07.591043 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-09 04:20:07.591055 | orchestrator | Monday 09 February 2026 04:20:03 +0000 (0:00:00.633) 0:00:28.490 ******* 2026-02-09 04:20:07.591075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:07.591109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:09.364505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:09.364642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:09.364759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:10.330739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:10.330836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:10.330847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:10.330878 | orchestrator | 2026-02-09 04:20:10.330887 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-09 04:20:10.330895 | orchestrator | Monday 09 February 2026 04:20:09 +0000 (0:00:05.918) 0:00:34.409 ******* 2026-02-09 04:20:10.330927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:10.330936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 04:20:10.330961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:10.330969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:10.330976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:10.330983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-09 04:20:10.330995 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:20:10.331007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:10.331015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 04:20:10.331021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:10.331033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.138985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.139069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.139098 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:20:11.139146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:11.139161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 04:20:11.139173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.139180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.139202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 
04:20:11.139215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.139221 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:20:11.139228 | orchestrator | 2026-02-09 04:20:11.139235 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-09 04:20:11.139242 | orchestrator | Monday 09 February 2026 04:20:10 +0000 (0:00:01.086) 0:00:35.495 ******* 2026-02-09 04:20:11.139253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:11.139260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 04:20:11.139266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.139277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.512994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.513162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.513191 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:20:11.513235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:11.513253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 04:20:11.513271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.513287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.513329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.513362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.513383 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:20:11.513409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:11.513429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 04:20:11.513448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.513467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:11.513510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:15.705270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:20:15.705371 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:20:15.705384 | orchestrator | 2026-02-09 04:20:15.705393 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-09 
04:20:15.705402 | orchestrator | Monday 09 February 2026 04:20:11 +0000 (0:00:01.060) 0:00:36.556 ******* 2026-02-09 04:20:15.705423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:15.705431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:15.705436 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:15.705469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:15.705477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:15.705481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:15.705489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:15.705494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:15.705500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:15.705509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:15.705519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:28.199479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:28.199665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:28.199686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:28.199699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:28.199731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:28.199744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:28.199776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:28.199790 | orchestrator | 2026-02-09 04:20:28.199804 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-09 04:20:28.199816 | orchestrator | Monday 09 February 2026 04:20:17 +0000 (0:00:05.968) 0:00:42.525 ******* 2026-02-09 04:20:28.199834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:28.199848 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:28.199860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:28.199880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:28.199901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:37.187713 | orchestrator | 2026-02-09 04:20:37.187721 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-09 04:20:37.187729 | orchestrator | Monday 09 February 2026 04:20:33 +0000 (0:00:15.633) 0:00:58.159 ******* 2026-02-09 04:20:37.187740 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-09 04:20:41.829783 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-09 04:20:41.829869 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-09 04:20:41.829879 | orchestrator | 2026-02-09 04:20:41.829886 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-09 04:20:41.829893 | orchestrator | Monday 09 February 2026 04:20:37 +0000 (0:00:04.070) 0:01:02.229 ******* 2026-02-09 04:20:41.829900 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-09 04:20:41.829906 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-09 04:20:41.829913 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-09 04:20:41.829919 | orchestrator | 2026-02-09 04:20:41.829925 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-09 04:20:41.829931 | orchestrator | Monday 09 February 2026 04:20:39 +0000 (0:00:02.692) 0:01:04.922 ******* 2026-02-09 04:20:41.829952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:41.829979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:41.829987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-09 04:20:41.830006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:41.830056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:41.830069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-09 04:20:41.830083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:41.830090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:41.830096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-09 04:20:41.830103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:41.830115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:44.927662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-09 04:20:44.927824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:44.927856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:44.927877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:44.927897 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:44.927910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:44.927941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:44.927954 | orchestrator | 2026-02-09 04:20:44.927967 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-09 04:20:44.927990 | orchestrator | Monday 09 February 2026 04:20:42 +0000 (0:00:03.052) 0:01:07.975 ******* 2026-02-09 04:20:44.928010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:44.928024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 
04:20:44.928036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:44.928047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:44.928065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:45.906774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:45.906852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:45.906861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:45.906868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:45.906874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:45.906880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:45.906899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:45.906923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:45.906929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:45.906935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:45.906941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:45.906947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 04:20:45.906953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 04:20:45.906964 | orchestrator |
2026-02-09 04:20:45.906971 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-09 04:20:45.906981 | orchestrator | Monday 09 February 2026 04:20:45 +0000 (0:00:02.974) 0:01:10.950 *******
2026-02-09 04:20:47.101516 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:20:47.102259 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:20:47.102279 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:20:47.102287 | orchestrator |
2026-02-09 04:20:47.102296 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-09 04:20:47.102305 | orchestrator | Monday 09 February 2026 04:20:46 +0000 (0:00:00.336) 0:01:11.286 *******
2026-02-09 04:20:47.102326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:47.102335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 04:20:47.102342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:47.102347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:47.102352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:47.102384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:20:47.102388 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:20:47.102395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:47.102399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 04:20:47.102403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:47.102407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:47.102411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:47.102422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:20:50.579410 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:20:50.579571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-09 04:20:50.579589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 04:20:50.579596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 04:20:50.579602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 04:20:50.579624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 04:20:50.579629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 04:20:50.579634 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:20:50.579639 | orchestrator |
2026-02-09 04:20:50.579658 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-09 04:20:50.579667 | orchestrator | Monday 09 February 2026 04:20:47 +0000 (0:00:00.973) 0:01:12.260 *******
2026-02-09 04:20:50.579679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-09 04:20:50.579688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:50.579695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-09 04:20:50.579707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:50.579720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-09 04:20:52.446465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 04:20:52.446476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 04:20:52.446482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-09 04:20:52.446489 | orchestrator |
2026-02-09 04:20:52.446496 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-09 04:20:52.446505 | orchestrator | Monday 09 February 2026 04:20:51 +0000 (0:00:04.629) 0:01:16.889 *******
2026-02-09 04:20:52.446511 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:20:52.446568 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:22:08.222862 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:22:08.222935 | orchestrator |
2026-02-09 04:22:08.222942 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-09 04:22:08.222948 | orchestrator | Monday 09 February 2026 04:20:52 +0000 (0:00:00.603) 0:01:17.493 *******
2026-02-09 04:22:08.222953 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-09 04:22:08.222957 | orchestrator |
2026-02-09 04:22:08.222972 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-09 04:22:08.222977 | orchestrator | Monday 09 February 2026 04:20:54 +0000 (0:00:01.927) 0:01:19.420 *******
2026-02-09 04:22:08.222981 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-09 04:22:08.222985 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-09 04:22:08.222990 | orchestrator |
2026-02-09 04:22:08.222994 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-09 04:22:08.222998 | orchestrator | Monday 09 February 2026 04:20:56 +0000 (0:00:02.185) 0:01:21.606 *******
2026-02-09 04:22:08.223002 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:22:08.223005 | orchestrator |
2026-02-09 04:22:08.223009 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-09 04:22:08.223013 | orchestrator | Monday 09 February 2026 04:21:11 +0000 (0:00:14.710) 0:01:36.317 *******
2026-02-09 04:22:08.223017 | orchestrator |
2026-02-09 04:22:08.223021 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-09 04:22:08.223025 | orchestrator | Monday 09 February 2026 04:21:11 +0000 (0:00:00.070) 0:01:36.388 *******
2026-02-09 04:22:08.223028 | orchestrator |
2026-02-09 04:22:08.223032 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-09 04:22:08.223036 | orchestrator | Monday 09 February 2026 04:21:11 +0000 (0:00:00.072) 0:01:36.461 *******
2026-02-09 04:22:08.223053 | orchestrator |
2026-02-09 04:22:08.223057 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-02-09 04:22:08.223061 | orchestrator | Monday 09 February 2026 04:21:11 +0000 (0:00:00.073) 0:01:36.534 *******
2026-02-09 04:22:08.223064 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:22:08.223069 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:22:08.223072 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:22:08.223076 | orchestrator |
2026-02-09 04:22:08.223080 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-02-09 04:22:08.223084 | orchestrator | Monday 09 February 2026 04:21:24 +0000 (0:00:12.779) 0:01:49.313 *******
2026-02-09 04:22:08.223087 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:22:08.223091 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:22:08.223095 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:22:08.223099 | orchestrator |
2026-02-09 04:22:08.223102 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-02-09 04:22:08.223106 | orchestrator | Monday 09 February 2026 04:21:32 +0000 (0:00:08.496) 0:01:57.810 *******
2026-02-09 04:22:08.223110 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:22:08.223114 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:22:08.223117 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:22:08.223121 | orchestrator |
2026-02-09 04:22:08.223125 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-02-09 04:22:08.223129 | orchestrator | Monday 09 February 2026 04:21:38 +0000 (0:00:05.835) 0:02:03.645 *******
2026-02-09 04:22:08.223132 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:22:08.223136 | orchestrator | changed: [testbed-node-1]
2026-02-09 04:22:08.223140 | orchestrator | changed: [testbed-node-2]
2026-02-09 04:22:08.223143 | orchestrator |
2026-02-09 04:22:08.223147
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-09 04:22:08.223151 | orchestrator | Monday 09 February 2026 04:21:44 +0000 (0:00:05.727) 0:02:09.372 ******* 2026-02-09 04:22:08.223155 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:22:08.223159 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:22:08.223162 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:22:08.223166 | orchestrator | 2026-02-09 04:22:08.223170 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-09 04:22:08.223174 | orchestrator | Monday 09 February 2026 04:21:55 +0000 (0:00:10.782) 0:02:20.155 ******* 2026-02-09 04:22:08.223177 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:22:08.223181 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:22:08.223185 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:22:08.223189 | orchestrator | 2026-02-09 04:22:08.223192 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-09 04:22:08.223196 | orchestrator | Monday 09 February 2026 04:22:00 +0000 (0:00:05.879) 0:02:26.034 ******* 2026-02-09 04:22:08.223200 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:22:08.223204 | orchestrator | 2026-02-09 04:22:08.223207 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:22:08.223212 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-09 04:22:08.223216 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 04:22:08.223220 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 04:22:08.223224 | orchestrator | 2026-02-09 04:22:08.223228 | orchestrator | 2026-02-09 04:22:08.223231 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-09 04:22:08.223235 | orchestrator | Monday 09 February 2026 04:22:07 +0000 (0:00:06.782) 0:02:32.817 ******* 2026-02-09 04:22:08.223239 | orchestrator | =============================================================================== 2026-02-09 04:22:08.223247 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.63s 2026-02-09 04:22:08.223251 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.71s 2026-02-09 04:22:08.223265 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.78s 2026-02-09 04:22:08.223269 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.78s 2026-02-09 04:22:08.223273 | orchestrator | designate : Restart designate-api container ----------------------------- 8.50s 2026-02-09 04:22:08.223277 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.78s 2026-02-09 04:22:08.223284 | orchestrator | designate : Copying over config.json files for services ----------------- 5.97s 2026-02-09 04:22:08.223288 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 5.94s 2026-02-09 04:22:08.223292 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.92s 2026-02-09 04:22:08.223295 | orchestrator | designate : Restart designate-worker container -------------------------- 5.88s 2026-02-09 04:22:08.223299 | orchestrator | designate : Restart designate-central container ------------------------- 5.84s 2026-02-09 04:22:08.223303 | orchestrator | designate : Restart designate-producer container ------------------------ 5.73s 2026-02-09 04:22:08.223306 | orchestrator | designate : Check designate containers ---------------------------------- 4.63s 2026-02-09 04:22:08.223310 | orchestrator | designate : Copying over 
pools.yaml ------------------------------------- 4.07s 2026-02-09 04:22:08.223314 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.84s 2026-02-09 04:22:08.223318 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.47s 2026-02-09 04:22:08.223322 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.28s 2026-02-09 04:22:08.223326 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.05s 2026-02-09 04:22:08.223329 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.04s 2026-02-09 04:22:08.223333 | orchestrator | service-ks-register : designate | Creating projects --------------------- 2.99s 2026-02-09 04:22:10.813349 | orchestrator | 2026-02-09 04:22:10 | INFO  | Task 9c5643be-493c-4e3d-b906-d33f77073df5 (octavia) was prepared for execution. 2026-02-09 04:22:10.813430 | orchestrator | 2026-02-09 04:22:10 | INFO  | It takes a moment until task 9c5643be-493c-4e3d-b906-d33f77073df5 (octavia) has been started and output is visible here. 
2026-02-09 04:24:09.771967 | orchestrator | 2026-02-09 04:24:09.772104 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:24:09.772131 | orchestrator | 2026-02-09 04:24:09.772153 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:24:09.772174 | orchestrator | Monday 09 February 2026 04:22:15 +0000 (0:00:00.294) 0:00:00.294 ******* 2026-02-09 04:24:09.772193 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:09.772214 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:24:09.772232 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:24:09.772244 | orchestrator | 2026-02-09 04:24:09.772255 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:24:09.772266 | orchestrator | Monday 09 February 2026 04:22:15 +0000 (0:00:00.340) 0:00:00.634 ******* 2026-02-09 04:24:09.772277 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-09 04:24:09.772289 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-09 04:24:09.772299 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-09 04:24:09.772310 | orchestrator | 2026-02-09 04:24:09.772321 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-09 04:24:09.772333 | orchestrator | 2026-02-09 04:24:09.772344 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-09 04:24:09.772355 | orchestrator | Monday 09 February 2026 04:22:16 +0000 (0:00:00.494) 0:00:01.129 ******* 2026-02-09 04:24:09.772427 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:24:09.772442 | orchestrator | 2026-02-09 04:24:09.772453 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-09 04:24:09.772464 | orchestrator | Monday 09 February 2026 04:22:16 +0000 (0:00:00.637) 0:00:01.766 ******* 2026-02-09 04:24:09.772475 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-09 04:24:09.772486 | orchestrator | 2026-02-09 04:24:09.772496 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-09 04:24:09.772507 | orchestrator | Monday 09 February 2026 04:22:20 +0000 (0:00:03.233) 0:00:05.000 ******* 2026-02-09 04:24:09.772518 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-09 04:24:09.772529 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-09 04:24:09.772540 | orchestrator | 2026-02-09 04:24:09.772551 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-09 04:24:09.772561 | orchestrator | Monday 09 February 2026 04:22:26 +0000 (0:00:06.020) 0:00:11.020 ******* 2026-02-09 04:24:09.772572 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-09 04:24:09.772583 | orchestrator | 2026-02-09 04:24:09.772594 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-09 04:24:09.772604 | orchestrator | Monday 09 February 2026 04:22:29 +0000 (0:00:02.971) 0:00:13.992 ******* 2026-02-09 04:24:09.772616 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-09 04:24:09.772626 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-09 04:24:09.772638 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-09 04:24:09.772648 | orchestrator | 2026-02-09 04:24:09.772659 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-09 04:24:09.772669 | orchestrator | Monday 09 February 2026 04:22:36 +0000 
(0:00:07.762) 0:00:21.755 ******* 2026-02-09 04:24:09.772680 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-09 04:24:09.772691 | orchestrator | 2026-02-09 04:24:09.772701 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-09 04:24:09.772712 | orchestrator | Monday 09 February 2026 04:22:39 +0000 (0:00:03.186) 0:00:24.942 ******* 2026-02-09 04:24:09.772738 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-09 04:24:09.772749 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-09 04:24:09.772760 | orchestrator | 2026-02-09 04:24:09.772770 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-09 04:24:09.772789 | orchestrator | Monday 09 February 2026 04:22:46 +0000 (0:00:06.680) 0:00:31.622 ******* 2026-02-09 04:24:09.772816 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-09 04:24:09.772836 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-09 04:24:09.772852 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-09 04:24:09.772870 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-09 04:24:09.772887 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-09 04:24:09.772903 | orchestrator | 2026-02-09 04:24:09.772919 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-09 04:24:09.772935 | orchestrator | Monday 09 February 2026 04:23:01 +0000 (0:00:14.401) 0:00:46.024 ******* 2026-02-09 04:24:09.772953 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:24:09.772972 | orchestrator | 2026-02-09 04:24:09.772992 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-09 04:24:09.773010 | orchestrator | Monday 09 February 2026 04:23:01 +0000 (0:00:00.851) 0:00:46.876 ******* 2026-02-09 04:24:09.773047 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.773059 | orchestrator | 2026-02-09 04:24:09.773070 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-09 04:24:09.773081 | orchestrator | Monday 09 February 2026 04:23:06 +0000 (0:00:04.515) 0:00:51.391 ******* 2026-02-09 04:24:09.773092 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.773103 | orchestrator | 2026-02-09 04:24:09.773117 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-09 04:24:09.773167 | orchestrator | Monday 09 February 2026 04:23:10 +0000 (0:00:03.640) 0:00:55.032 ******* 2026-02-09 04:24:09.773192 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:09.773210 | orchestrator | 2026-02-09 04:24:09.773227 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-09 04:24:09.773244 | orchestrator | Monday 09 February 2026 04:23:13 +0000 (0:00:02.968) 0:00:58.001 ******* 2026-02-09 04:24:09.773261 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-09 04:24:09.773281 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-09 04:24:09.773300 | orchestrator | 2026-02-09 04:24:09.773317 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-09 04:24:09.773337 | orchestrator | Monday 09 February 2026 04:23:22 +0000 (0:00:09.963) 0:01:07.964 ******* 2026-02-09 04:24:09.773356 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-09 04:24:09.773369 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-09 04:24:09.773511 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-09 04:24:09.773533 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-09 04:24:09.773553 | orchestrator | 2026-02-09 04:24:09.773572 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-09 04:24:09.773584 | orchestrator | Monday 09 February 2026 04:23:37 +0000 (0:00:14.526) 0:01:22.491 ******* 2026-02-09 04:24:09.773595 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.773610 | orchestrator | 2026-02-09 04:24:09.773621 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-09 04:24:09.773632 | orchestrator | Monday 09 February 2026 04:23:41 +0000 (0:00:04.307) 0:01:26.799 ******* 2026-02-09 04:24:09.773643 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.773654 | orchestrator | 2026-02-09 04:24:09.773665 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-09 04:24:09.773676 | orchestrator | Monday 09 February 2026 04:23:46 +0000 (0:00:05.073) 0:01:31.872 ******* 2026-02-09 04:24:09.773687 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:24:09.773697 | orchestrator | 2026-02-09 04:24:09.773708 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-09 04:24:09.773719 | orchestrator | Monday 09 February 2026 04:23:47 +0000 (0:00:00.271) 0:01:32.143 ******* 2026-02-09 04:24:09.773730 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:09.773741 | orchestrator | 2026-02-09 04:24:09.773752 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-09 04:24:09.773762 | orchestrator | Monday 09 February 2026 04:23:51 +0000 (0:00:04.139) 0:01:36.283 ******* 2026-02-09 04:24:09.773773 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:24:09.773784 | orchestrator | 2026-02-09 04:24:09.773795 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-09 04:24:09.773806 | orchestrator | Monday 09 February 2026 04:23:52 +0000 (0:00:01.200) 0:01:37.483 ******* 2026-02-09 04:24:09.773816 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.773827 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:24:09.773850 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:24:09.773861 | orchestrator | 2026-02-09 04:24:09.773872 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-09 04:24:09.773883 | orchestrator | Monday 09 February 2026 04:23:57 +0000 (0:00:05.335) 0:01:42.819 ******* 2026-02-09 04:24:09.773893 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:24:09.773904 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.773924 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:24:09.773935 | orchestrator | 2026-02-09 04:24:09.773946 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-09 04:24:09.773957 | orchestrator | Monday 09 February 2026 04:24:02 +0000 (0:00:04.413) 0:01:47.232 ******* 2026-02-09 04:24:09.773967 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.773978 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:24:09.773989 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:24:09.774000 | orchestrator | 2026-02-09 04:24:09.774010 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-09 
04:24:09.774094 | orchestrator | Monday 09 February 2026 04:24:03 +0000 (0:00:01.106) 0:01:48.338 ******* 2026-02-09 04:24:09.774106 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:09.774116 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:24:09.774127 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:24:09.774138 | orchestrator | 2026-02-09 04:24:09.774149 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-09 04:24:09.774160 | orchestrator | Monday 09 February 2026 04:24:05 +0000 (0:00:01.722) 0:01:50.061 ******* 2026-02-09 04:24:09.774171 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:24:09.774181 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:24:09.774192 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.774203 | orchestrator | 2026-02-09 04:24:09.774213 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-09 04:24:09.774224 | orchestrator | Monday 09 February 2026 04:24:06 +0000 (0:00:01.251) 0:01:51.312 ******* 2026-02-09 04:24:09.774235 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.774249 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:24:09.774271 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:24:09.774301 | orchestrator | 2026-02-09 04:24:09.774320 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-09 04:24:09.774337 | orchestrator | Monday 09 February 2026 04:24:07 +0000 (0:00:01.225) 0:01:52.537 ******* 2026-02-09 04:24:09.774355 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:24:09.774403 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:24:09.774421 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:09.774440 | orchestrator | 2026-02-09 04:24:09.774478 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-09 04:24:32.957952 | orchestrator 
| Monday 09 February 2026 04:24:09 +0000 (0:00:02.187) 0:01:54.724 ******* 2026-02-09 04:24:32.958107 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:24:32.958153 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:24:32.958160 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:24:32.958166 | orchestrator | 2026-02-09 04:24:32.958173 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-09 04:24:32.958179 | orchestrator | Monday 09 February 2026 04:24:11 +0000 (0:00:01.456) 0:01:56.181 ******* 2026-02-09 04:24:32.958185 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:32.958193 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:24:32.958200 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:24:32.958206 | orchestrator | 2026-02-09 04:24:32.958213 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-09 04:24:32.958220 | orchestrator | Monday 09 February 2026 04:24:11 +0000 (0:00:00.650) 0:01:56.831 ******* 2026-02-09 04:24:32.958227 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:24:32.958233 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:32.958239 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:24:32.958245 | orchestrator | 2026-02-09 04:24:32.958274 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-09 04:24:32.958281 | orchestrator | Monday 09 February 2026 04:24:14 +0000 (0:00:02.899) 0:01:59.731 ******* 2026-02-09 04:24:32.958288 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:24:32.958295 | orchestrator | 2026-02-09 04:24:32.958301 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-09 04:24:32.958307 | orchestrator | Monday 09 February 2026 04:24:15 +0000 (0:00:00.540) 0:02:00.271 ******* 2026-02-09 
04:24:32.958313 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:32.958319 | orchestrator | 2026-02-09 04:24:32.958326 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-09 04:24:32.958332 | orchestrator | Monday 09 February 2026 04:24:18 +0000 (0:00:03.187) 0:02:03.458 ******* 2026-02-09 04:24:32.958338 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:32.958343 | orchestrator | 2026-02-09 04:24:32.958349 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-09 04:24:32.958355 | orchestrator | Monday 09 February 2026 04:24:21 +0000 (0:00:02.667) 0:02:06.126 ******* 2026-02-09 04:24:32.958428 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-09 04:24:32.958435 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-09 04:24:32.958443 | orchestrator | 2026-02-09 04:24:32.958449 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-09 04:24:32.958455 | orchestrator | Monday 09 February 2026 04:24:27 +0000 (0:00:05.970) 0:02:12.097 ******* 2026-02-09 04:24:32.958461 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:32.958466 | orchestrator | 2026-02-09 04:24:32.958476 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-09 04:24:32.958482 | orchestrator | Monday 09 February 2026 04:24:30 +0000 (0:00:03.187) 0:02:15.284 ******* 2026-02-09 04:24:32.958488 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:24:32.958494 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:24:32.958501 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:24:32.958507 | orchestrator | 2026-02-09 04:24:32.958513 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-09 04:24:32.958519 | orchestrator | Monday 09 February 2026 04:24:30 +0000 (0:00:00.585) 0:02:15.870 ******* 
2026-02-09 04:24:32.958543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:24:32.958572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:24:32.958589 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:24:32.958596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:24:32.958605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:24:32.958611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:24:32.958623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:32.958632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:32.958654 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:34.461738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:34.461835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:34.461850 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:34.461877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:24:34.461889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:24:34.461899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:24:34.461931 | orchestrator | 2026-02-09 04:24:34.461944 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-09 04:24:34.461956 | orchestrator | Monday 09 February 2026 04:24:33 +0000 (0:00:02.469) 0:02:18.339 ******* 2026-02-09 04:24:34.461965 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:24:34.461977 | orchestrator | 2026-02-09 04:24:34.461986 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-09 04:24:34.461996 | orchestrator | Monday 09 February 2026 04:24:33 +0000 (0:00:00.141) 0:02:18.481 ******* 2026-02-09 04:24:34.462006 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:24:34.462093 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:24:34.462105 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:24:34.462115 | orchestrator | 2026-02-09 04:24:34.462125 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-09 04:24:34.462135 | orchestrator | Monday 09 February 2026 04:24:33 +0000 (0:00:00.316) 0:02:18.797 ******* 2026-02-09 04:24:34.462147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 04:24:34.462160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 04:24:34.462177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 04:24:34.462188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 04:24:34.462207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:24:34.462217 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:24:34.462236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 04:24:39.144565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 04:24:39.144660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 04:24:39.144687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 04:24:39.144697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:24:39.144742 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:24:39.144755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 04:24:39.144764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 04:24:39.144789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 04:24:39.144799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 04:24:39.144807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:24:39.144816 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:24:39.144825 | orchestrator | 2026-02-09 04:24:39.144845 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-09 04:24:39.144912 | orchestrator | Monday 09 February 2026 04:24:34 +0000 (0:00:00.751) 0:02:19.549 ******* 2026-02-09 04:24:39.144929 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:24:39.144943 | orchestrator | 2026-02-09 04:24:39.144957 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-09 04:24:39.144970 | orchestrator | Monday 09 February 2026 04:24:35 +0000 (0:00:00.681) 0:02:20.230 ******* 2026-02-09 04:24:39.144983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:24:39.144993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:24:39.145010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:24:40.727934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:24:40.728030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:24:40.728054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:24:40.728061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:40.728068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:40.728073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:40.728088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:40.728093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:40.728105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:40.728110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:24:40.728116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:24:40.728120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:24:40.728126 | orchestrator | 2026-02-09 04:24:40.728131 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-09 04:24:40.728137 | orchestrator | Monday 09 February 2026 04:24:40 +0000 (0:00:04.881) 0:02:25.111 ******* 2026-02-09 04:24:40.728147 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 04:24:40.828826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 04:24:40.828990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 04:24:40.829012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 04:24:40.829026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:24:40.829038 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:24:40.829052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 04:24:40.829066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 04:24:40.829095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 04:24:40.829121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 04:24:40.829133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:24:40.829144 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:24:40.829156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 04:24:40.829167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 04:24:40.829178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 04:24:40.829208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-09 04:24:41.654467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:24:41.654574 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:24:41.654592 | orchestrator | 2026-02-09 04:24:41.654605 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-09 04:24:41.654618 | orchestrator | Monday 09 February 2026 04:24:40 +0000 (0:00:00.681) 0:02:25.793 ******* 2026-02-09 04:24:41.654631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-09 04:24:41.654644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 04:24:41.654657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 04:24:41.654669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 04:24:41.654721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:24:41.654734 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:24:41.654753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 04:24:41.654766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 04:24:41.654777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 04:24:41.654788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 04:24:41.654799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:24:41.654819 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:24:41.654838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 04:24:46.360046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 04:24:46.360173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 04:24:46.360194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 04:24:46.360216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 04:24:46.360268 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:24:46.360290 | orchestrator | 2026-02-09 04:24:46.360309 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-09 
04:24:46.360328 | orchestrator | Monday 09 February 2026 04:24:42 +0000 (0:00:01.419) 0:02:27.212 ******* 2026-02-09 04:24:46.360346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:24:46.360475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:24:46.360500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:24:46.360521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:24:46.360541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:24:46.360575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:24:46.360595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:24:46.360626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:02.453930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:02.454155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:02.454189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:02.454238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:02.454257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:25:02.454277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-09 04:25:02.454336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:25:02.454401 | orchestrator | 2026-02-09 04:25:02.454421 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-09 04:25:02.454442 | orchestrator | Monday 09 February 2026 04:24:47 +0000 (0:00:05.052) 0:02:32.264 ******* 2026-02-09 04:25:02.454460 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-09 04:25:02.454480 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-09 04:25:02.454499 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-09 04:25:02.454517 | orchestrator | 2026-02-09 04:25:02.454535 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-09 04:25:02.454554 | orchestrator | Monday 09 February 2026 04:24:48 +0000 (0:00:01.629) 0:02:33.894 ******* 2026-02-09 04:25:02.454584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:25:02.454625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:25:02.454645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:25:02.454687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:25:17.614739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:25:17.614870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:25:17.614894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:17.614938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:17.614952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:17.614967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:17.615018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:17.615034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:17.615047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:25:17.615071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:25:17.615085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:25:17.615097 | orchestrator | 2026-02-09 04:25:17.615112 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-09 04:25:17.615127 | orchestrator | Monday 09 February 2026 04:25:05 +0000 (0:00:16.871) 0:02:50.765 ******* 2026-02-09 04:25:17.615139 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:25:17.615152 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:25:17.615165 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:25:17.615179 | orchestrator | 2026-02-09 04:25:17.615191 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-09 04:25:17.615204 | orchestrator | Monday 09 February 2026 04:25:07 +0000 (0:00:01.776) 0:02:52.542 ******* 2026-02-09 04:25:17.615215 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-09 04:25:17.615228 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-09 04:25:17.615240 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-09 04:25:17.615252 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-09 04:25:17.615265 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-09 04:25:17.615276 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-09 04:25:17.615288 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-09 04:25:17.615301 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-09 04:25:17.615313 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-09 04:25:17.615325 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-09 04:25:17.615366 | orchestrator 
| changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-09 04:25:17.615379 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-09 04:25:17.615392 | orchestrator | 2026-02-09 04:25:17.615404 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-09 04:25:17.615418 | orchestrator | Monday 09 February 2026 04:25:12 +0000 (0:00:05.005) 0:02:57.547 ******* 2026-02-09 04:25:17.615430 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-09 04:25:17.615452 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-09 04:25:17.615479 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-09 04:25:26.050771 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-09 04:25:26.050924 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-09 04:25:26.050964 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-09 04:25:26.050975 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-09 04:25:26.050986 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-09 04:25:26.050995 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-09 04:25:26.051005 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-09 04:25:26.051013 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-09 04:25:26.051022 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-09 04:25:26.051032 | orchestrator | 2026-02-09 04:25:26.051042 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-09 04:25:26.051053 | orchestrator | Monday 09 February 2026 04:25:17 +0000 (0:00:05.025) 0:03:02.573 ******* 2026-02-09 04:25:26.051063 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-09 04:25:26.051073 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-09 04:25:26.051079 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-09 04:25:26.051084 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-09 04:25:26.051090 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-09 04:25:26.051095 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-09 04:25:26.051100 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-09 04:25:26.051105 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-09 04:25:26.051111 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-09 04:25:26.051116 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-09 04:25:26.051121 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-09 04:25:26.051126 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-09 04:25:26.051131 | orchestrator | 2026-02-09 04:25:26.051137 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-09 04:25:26.051166 | orchestrator | Monday 09 February 2026 04:25:22 +0000 (0:00:05.298) 0:03:07.872 ******* 2026-02-09 04:25:26.051175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:25:26.051184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:25:26.051228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 04:25:26.051235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:25:26.051243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-09 04:25:26.051249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-09 04:25:26.051255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:26.051262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:26.051274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-09 04:25:26.051287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:26:37.878365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:26:37.878557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-09 04:26:37.878574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:26:37.878584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:26:37.878592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-09 04:26:37.878620 | orchestrator | 2026-02-09 
04:26:37.878631 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-09 04:26:37.878640 | orchestrator | Monday 09 February 2026 04:25:26 +0000 (0:00:04.088) 0:03:11.960 ******* 2026-02-09 04:26:37.878648 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:26:37.878657 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:26:37.878665 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:26:37.878673 | orchestrator | 2026-02-09 04:26:37.878681 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-09 04:26:37.878689 | orchestrator | Monday 09 February 2026 04:25:27 +0000 (0:00:00.616) 0:03:12.577 ******* 2026-02-09 04:26:37.878707 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:26:37.878715 | orchestrator | 2026-02-09 04:26:37.878723 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-09 04:26:37.878731 | orchestrator | Monday 09 February 2026 04:25:29 +0000 (0:00:02.013) 0:03:14.590 ******* 2026-02-09 04:26:37.878739 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:26:37.878747 | orchestrator | 2026-02-09 04:26:37.878754 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-09 04:26:37.878762 | orchestrator | Monday 09 February 2026 04:25:31 +0000 (0:00:01.994) 0:03:16.584 ******* 2026-02-09 04:26:37.878770 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:26:37.878777 | orchestrator | 2026-02-09 04:26:37.878785 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-09 04:26:37.878794 | orchestrator | Monday 09 February 2026 04:25:33 +0000 (0:00:02.158) 0:03:18.742 ******* 2026-02-09 04:26:37.878817 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:26:37.878827 | orchestrator | 2026-02-09 04:26:37.878836 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-09 04:26:37.878847 | orchestrator | Monday 09 February 2026 04:25:35 +0000 (0:00:02.015) 0:03:20.758 ******* 2026-02-09 04:26:37.878856 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:26:37.878865 | orchestrator | 2026-02-09 04:26:37.878874 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-09 04:26:37.878883 | orchestrator | Monday 09 February 2026 04:25:56 +0000 (0:00:20.771) 0:03:41.529 ******* 2026-02-09 04:26:37.878892 | orchestrator | 2026-02-09 04:26:37.878902 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-09 04:26:37.878911 | orchestrator | Monday 09 February 2026 04:25:56 +0000 (0:00:00.069) 0:03:41.599 ******* 2026-02-09 04:26:37.878920 | orchestrator | 2026-02-09 04:26:37.878930 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-09 04:26:37.878939 | orchestrator | Monday 09 February 2026 04:25:56 +0000 (0:00:00.070) 0:03:41.669 ******* 2026-02-09 04:26:37.878946 | orchestrator | 2026-02-09 04:26:37.878954 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-09 04:26:37.878962 | orchestrator | Monday 09 February 2026 04:25:56 +0000 (0:00:00.069) 0:03:41.739 ******* 2026-02-09 04:26:37.878970 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:26:37.878978 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:26:37.878986 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:26:37.878994 | orchestrator | 2026-02-09 04:26:37.879001 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-09 04:26:37.879011 | orchestrator | Monday 09 February 2026 04:26:07 +0000 (0:00:11.120) 0:03:52.859 ******* 2026-02-09 04:26:37.879023 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:26:37.879036 | orchestrator | changed: 
[testbed-node-0] 2026-02-09 04:26:37.879068 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:26:37.879081 | orchestrator | 2026-02-09 04:26:37.879093 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-09 04:26:37.879106 | orchestrator | Monday 09 February 2026 04:26:19 +0000 (0:00:11.151) 0:04:04.010 ******* 2026-02-09 04:26:37.879118 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:26:37.879131 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:26:37.879143 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:26:37.879156 | orchestrator | 2026-02-09 04:26:37.879169 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-09 04:26:37.879182 | orchestrator | Monday 09 February 2026 04:26:24 +0000 (0:00:05.152) 0:04:09.163 ******* 2026-02-09 04:26:37.879195 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:26:37.879208 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:26:37.879220 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:26:37.879228 | orchestrator | 2026-02-09 04:26:37.879235 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-09 04:26:37.879243 | orchestrator | Monday 09 February 2026 04:26:32 +0000 (0:00:08.262) 0:04:17.425 ******* 2026-02-09 04:26:37.879251 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:26:37.879259 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:26:37.879266 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:26:37.879274 | orchestrator | 2026-02-09 04:26:37.879328 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:26:37.879338 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-09 04:26:37.879348 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-09 04:26:37.879356 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-09 04:26:37.879364 | orchestrator | 2026-02-09 04:26:37.879372 | orchestrator | 2026-02-09 04:26:37.879380 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:26:37.879388 | orchestrator | Monday 09 February 2026 04:26:37 +0000 (0:00:05.400) 0:04:22.826 ******* 2026-02-09 04:26:37.879396 | orchestrator | =============================================================================== 2026-02-09 04:26:37.879404 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.77s 2026-02-09 04:26:37.879412 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.87s 2026-02-09 04:26:37.879419 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.53s 2026-02-09 04:26:37.879427 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.40s 2026-02-09 04:26:37.879435 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.15s 2026-02-09 04:26:37.879443 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.12s 2026-02-09 04:26:37.879451 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.96s 2026-02-09 04:26:37.879466 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.26s 2026-02-09 04:26:37.879474 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.76s 2026-02-09 04:26:37.879482 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.68s 2026-02-09 04:26:37.879489 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.02s 2026-02-09 04:26:37.879497 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 5.97s 2026-02-09 04:26:37.879505 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.40s 2026-02-09 04:26:37.879513 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.34s 2026-02-09 04:26:37.879529 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.30s 2026-02-09 04:26:38.295729 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.15s 2026-02-09 04:26:38.295851 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.07s 2026-02-09 04:26:38.295866 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.05s 2026-02-09 04:26:38.295878 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.03s 2026-02-09 04:26:38.295889 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.01s 2026-02-09 04:26:40.856605 | orchestrator | 2026-02-09 04:26:40 | INFO  | Task e55479a2-af6e-4294-8c1f-82b556616525 (ceilometer) was prepared for execution. 2026-02-09 04:26:40.856686 | orchestrator | 2026-02-09 04:26:40 | INFO  | It takes a moment until task e55479a2-af6e-4294-8c1f-82b556616525 (ceilometer) has been started and output is visible here. 
2026-02-09 04:27:03.605738 | orchestrator | 2026-02-09 04:27:03.605849 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:27:03.605868 | orchestrator | 2026-02-09 04:27:03.605881 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:27:03.605892 | orchestrator | Monday 09 February 2026 04:26:45 +0000 (0:00:00.271) 0:00:00.271 ******* 2026-02-09 04:27:03.605903 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:27:03.605916 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:27:03.605923 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:27:03.605930 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:27:03.605937 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:27:03.605943 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:27:03.605949 | orchestrator | 2026-02-09 04:27:03.605956 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:27:03.605963 | orchestrator | Monday 09 February 2026 04:26:46 +0000 (0:00:00.728) 0:00:01.000 ******* 2026-02-09 04:27:03.605969 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-09 04:27:03.605976 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-09 04:27:03.605982 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-09 04:27:03.605988 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-09 04:27:03.605994 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-09 04:27:03.606000 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-09 04:27:03.606007 | orchestrator | 2026-02-09 04:27:03.606013 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-09 04:27:03.606053 | orchestrator | 2026-02-09 04:27:03.606060 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-09 04:27:03.606066 | orchestrator | Monday 09 February 2026 04:26:46 +0000 (0:00:00.671) 0:00:01.672 ******* 2026-02-09 04:27:03.606073 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 04:27:03.606081 | orchestrator | 2026-02-09 04:27:03.606087 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-09 04:27:03.606093 | orchestrator | Monday 09 February 2026 04:26:48 +0000 (0:00:01.325) 0:00:02.997 ******* 2026-02-09 04:27:03.606100 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:03.606106 | orchestrator | 2026-02-09 04:27:03.606119 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-09 04:27:03.606126 | orchestrator | Monday 09 February 2026 04:26:48 +0000 (0:00:00.125) 0:00:03.122 ******* 2026-02-09 04:27:03.606132 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:03.606138 | orchestrator | 2026-02-09 04:27:03.606145 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-09 04:27:03.606151 | orchestrator | Monday 09 February 2026 04:26:48 +0000 (0:00:00.149) 0:00:03.272 ******* 2026-02-09 04:27:03.606157 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-09 04:27:03.606183 | orchestrator | 2026-02-09 04:27:03.606191 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-09 04:27:03.606202 | orchestrator | Monday 09 February 2026 04:26:51 +0000 (0:00:03.518) 0:00:06.790 ******* 2026-02-09 04:27:03.606218 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-09 04:27:03.606228 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-09 04:27:03.606238 | orchestrator | 
2026-02-09 04:27:03.606247 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-09 04:27:03.606257 | orchestrator | Monday 09 February 2026 04:26:55 +0000 (0:00:03.354) 0:00:10.145 ******* 2026-02-09 04:27:03.606311 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-09 04:27:03.606322 | orchestrator | 2026-02-09 04:27:03.606332 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-09 04:27:03.606342 | orchestrator | Monday 09 February 2026 04:26:58 +0000 (0:00:02.893) 0:00:13.038 ******* 2026-02-09 04:27:03.606353 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-09 04:27:03.606377 | orchestrator | 2026-02-09 04:27:03.606388 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-09 04:27:03.606397 | orchestrator | Monday 09 February 2026 04:27:01 +0000 (0:00:03.926) 0:00:16.964 ******* 2026-02-09 04:27:03.606406 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:03.606416 | orchestrator | 2026-02-09 04:27:03.606426 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-09 04:27:03.606437 | orchestrator | Monday 09 February 2026 04:27:02 +0000 (0:00:00.145) 0:00:17.110 ******* 2026-02-09 04:27:03.606451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:03.606487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:03.606500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:03.606513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:03.606535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:27:03.606546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:27:03.606553 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:03.606567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:27:08.474645 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:08.474754 | orchestrator | 2026-02-09 04:27:08.474772 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-09 04:27:08.474786 | orchestrator | Monday 09 February 2026 04:27:03 +0000 (0:00:01.450) 0:00:18.561 ******* 2026-02-09 04:27:08.474824 | orchestrator | ok: [testbed-node-2 -> 
localhost] 2026-02-09 04:27:08.474841 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 04:27:08.474860 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-09 04:27:08.474877 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-09 04:27:08.474894 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-09 04:27:08.474911 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-09 04:27:08.474928 | orchestrator | 2026-02-09 04:27:08.474946 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-09 04:27:08.474966 | orchestrator | Monday 09 February 2026 04:27:05 +0000 (0:00:01.701) 0:00:20.262 ******* 2026-02-09 04:27:08.474985 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:27:08.475004 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:27:08.475024 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:27:08.475043 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:27:08.475061 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:27:08.475080 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:27:08.475093 | orchestrator | 2026-02-09 04:27:08.475104 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-09 04:27:08.475115 | orchestrator | Monday 09 February 2026 04:27:05 +0000 (0:00:00.606) 0:00:20.868 ******* 2026-02-09 04:27:08.475126 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:08.475137 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:08.475148 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:27:08.475158 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:08.475171 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:27:08.475185 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:08.475198 | orchestrator | 2026-02-09 04:27:08.475211 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter 
definitions] *** 2026-02-09 04:27:08.475225 | orchestrator | Monday 09 February 2026 04:27:06 +0000 (0:00:00.829) 0:00:21.698 ******* 2026-02-09 04:27:08.475238 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:27:08.475251 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:27:08.475305 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:27:08.475318 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:27:08.475369 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:27:08.475383 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:27:08.475395 | orchestrator | 2026-02-09 04:27:08.475409 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-09 04:27:08.475422 | orchestrator | Monday 09 February 2026 04:27:07 +0000 (0:00:00.633) 0:00:22.331 ******* 2026-02-09 04:27:08.475442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:08.475458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:08.475473 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:08.475509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:08.475539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:08.475554 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:08.475568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:08.475580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:08.475597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:08.475609 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:27:08.475620 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:08.475632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:08.475650 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:27:08.475671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:13.254445 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:13.254564 | orchestrator | 2026-02-09 04:27:13.254627 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-09 04:27:13.254650 | orchestrator | Monday 09 February 2026 04:27:08 +0000 (0:00:01.104) 0:00:23.436 ******* 2026-02-09 04:27:13.254673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:13.254694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:13.254707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:13.254736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:13.254748 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:13.254759 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:13.254771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:13.254804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 
'timeout': '30'}}})  2026-02-09 04:27:13.254837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:13.254850 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:27:13.254861 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:13.254872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:13.254883 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:27:13.254900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:13.254916 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:13.254935 | orchestrator | 2026-02-09 04:27:13.254955 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-09 04:27:13.254975 | orchestrator | Monday 09 February 2026 04:27:09 +0000 (0:00:00.923) 0:00:24.359 ******* 2026-02-09 04:27:13.255008 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 04:27:13.255030 | orchestrator | 2026-02-09 04:27:13.255049 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-09 04:27:13.255069 | orchestrator | Monday 09 February 2026 04:27:10 +0000 (0:00:00.708) 0:00:25.068 ******* 2026-02-09 04:27:13.255087 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:27:13.255106 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:27:13.255124 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:27:13.255141 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:27:13.255160 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:27:13.255179 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:27:13.255197 | orchestrator | 2026-02-09 04:27:13.255213 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-09 04:27:13.255224 | orchestrator | Monday 09 February 2026 04:27:10 +0000 (0:00:00.807) 
0:00:25.875 ******* 2026-02-09 04:27:13.255235 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:27:13.255246 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:27:13.255304 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:27:13.255324 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:27:13.255343 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:27:13.255361 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:27:13.255379 | orchestrator | 2026-02-09 04:27:13.255391 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-09 04:27:13.255402 | orchestrator | Monday 09 February 2026 04:27:11 +0000 (0:00:00.938) 0:00:26.814 ******* 2026-02-09 04:27:13.255412 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:13.255423 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:13.255434 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:27:13.255445 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:13.255455 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:27:13.255465 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:13.255476 | orchestrator | 2026-02-09 04:27:13.255487 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-09 04:27:13.255498 | orchestrator | Monday 09 February 2026 04:27:12 +0000 (0:00:00.801) 0:00:27.615 ******* 2026-02-09 04:27:13.255508 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:13.255519 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:13.255529 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:27:13.255541 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:13.255630 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:27:13.255642 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:13.255653 | orchestrator | 2026-02-09 04:27:18.332119 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-09 04:27:18.332233 | orchestrator | Monday 09 February 2026 04:27:13 +0000 (0:00:00.604) 0:00:28.220 ******* 2026-02-09 04:27:18.332249 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 04:27:18.332332 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-09 04:27:18.332344 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-09 04:27:18.332355 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-09 04:27:18.332366 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-09 04:27:18.332377 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-09 04:27:18.332388 | orchestrator | 2026-02-09 04:27:18.332400 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-09 04:27:18.332411 | orchestrator | Monday 09 February 2026 04:27:14 +0000 (0:00:01.314) 0:00:29.535 ******* 2026-02-09 04:27:18.332426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:18.332469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:18.332513 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:18.332540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:18.332553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:18.332564 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:18.332576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:18.332607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:18.332620 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:27:18.332632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:18.332654 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:27:18.332668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:18.332682 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:27:18.332700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:18.332713 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:27:18.332726 | orchestrator |
2026-02-09 04:27:18.332738 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-02-09 04:27:18.332749 | orchestrator | Monday 09 February 2026 04:27:15 +0000 (0:00:00.856) 0:00:30.391 *******
2026-02-09 04:27:18.332760 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:27:18.332773 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:27:18.332792 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:27:18.332810 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:27:18.332827 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:27:18.332845 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:27:18.332863 | orchestrator |
2026-02-09 04:27:18.332880 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-02-09 04:27:18.332898 | orchestrator | Monday 09 February 2026 04:27:16 +0000 (0:00:00.853) 0:00:31.245 *******
2026-02-09 04:27:18.332915 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 04:27:18.332932 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-09 04:27:18.332948 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-09 04:27:18.332967 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-09 04:27:18.332986 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-09 04:27:18.333003 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-09 04:27:18.333022 | orchestrator |
2026-02-09 04:27:18.333040 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-02-09 04:27:18.333059 | orchestrator | Monday 09 February 2026 04:27:17 +0000 (0:00:01.458) 0:00:32.703 *******
2026-02-09 04:27:18.333093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.195146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:24.195289 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:27:24.195374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.195392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:24.195421 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:27:24.195434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.195446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:24.195458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.195494 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:27:24.195506 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:27:24.195538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.195550 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:27:24.195561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.195572 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:27:24.195584 | orchestrator |
2026-02-09 04:27:24.195596 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-02-09 04:27:24.195608 | orchestrator | Monday 09 February 2026 04:27:18 +0000 (0:00:01.185) 0:00:33.889 *******
2026-02-09 04:27:24.195619 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:27:24.195630 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:27:24.195641 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:27:24.195651 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:27:24.195662 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:27:24.195676 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:27:24.195689 | orchestrator |
2026-02-09 04:27:24.195701 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-02-09 04:27:24.195720 | orchestrator | Monday 09 February 2026 04:27:19 +0000 (0:00:00.855) 0:00:34.745 *******
2026-02-09 04:27:24.195732 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:27:24.195745 | orchestrator |
2026-02-09 04:27:24.195758 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-02-09 04:27:24.195770 | orchestrator | Monday 09 February 2026 04:27:19 +0000 (0:00:00.150) 0:00:34.895 *******
2026-02-09 04:27:24.195783 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:27:24.195795 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:27:24.195810 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:27:24.195829 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:27:24.195847 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:27:24.195865 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:27:24.195884 | orchestrator |
2026-02-09 04:27:24.195904 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-02-09 04:27:24.195921 | orchestrator | Monday 09 February 2026 04:27:20 +0000 (0:00:00.631) 0:00:35.527 *******
2026-02-09 04:27:24.195941 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 04:27:24.195973 | orchestrator |
2026-02-09 04:27:24.195985 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-02-09 04:27:24.195996 | orchestrator | Monday 09 February 2026 04:27:21 +0000 (0:00:01.374) 0:00:36.901 *******
2026-02-09 04:27:24.196008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.196029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.753061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.753150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.753178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.753186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:24.753212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.753218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:24.753240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:24.753311 | orchestrator |
2026-02-09 04:27:24.753322 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-02-09 04:27:24.753330 | orchestrator | Monday 09 February 2026 04:27:24 +0000 (0:00:02.251) 0:00:39.153 *******
2026-02-09 04:27:24.753338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.753350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:24.753357 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:27:24.753372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.753378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:24.753383 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:27:24.753389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:24.753402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:26.882282 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:27:26.882388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:26.882409 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:27:26.882438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:26.882474 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:27:26.882487 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:26.882499 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:27:26.882510 | orchestrator |
2026-02-09 04:27:26.882522 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-02-09 04:27:26.882535 | orchestrator | Monday 09 February 2026 04:27:25 +0000 (0:00:00.875) 0:00:40.028 *******
2026-02-09 04:27:26.882547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:26.882561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:26.882591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:26.882604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:26.882621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:26.882641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:26.882653 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:27:26.882664 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:27:26.882675 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:27:26.882686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:26.882697 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:27:26.882709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:26.882721 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:27:26.882741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:34.462193 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:27:34.462340 | orchestrator |
2026-02-09 04:27:34.462354 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-02-09 04:27:34.462364 | orchestrator | Monday 09 February 2026 04:27:26 +0000 (0:00:01.812) 0:00:41.840 *******
2026-02-09 04:27:34.462393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:34.463157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:34.463172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-09 04:27:34.463182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:34.463191 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:34.463215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:34.463232 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-09 04:27:34.463268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:34.463278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-09 04:27:34.463286 | orchestrator |
2026-02-09 04:27:34.463294 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-02-09 04:27:34.463301 | orchestrator | Monday 09 February 2026 04:27:29 +0000 (0:00:02.527) 0:00:44.367 *******
2026-02-09 04:27:34.463309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group':
'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:34.463317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:34.463330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:44.211151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:44.211272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:44.211285 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:44.211294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:27:44.211302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:27:44.211309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:27:44.211334 | orchestrator | 2026-02-09 04:27:44.211343 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-02-09 04:27:44.211363 | orchestrator | Monday 09 February 2026 04:27:34 +0000 (0:00:05.054) 0:00:49.422 ******* 2026-02-09 04:27:44.211371 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 04:27:44.211380 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-09 04:27:44.211387 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-09 04:27:44.211393 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-09 04:27:44.211400 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-09 04:27:44.211406 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-09 04:27:44.211413 | orchestrator | 2026-02-09 04:27:44.211419 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-02-09 04:27:44.211426 | orchestrator | Monday 09 February 2026 04:27:36 +0000 (0:00:01.731) 0:00:51.154 ******* 2026-02-09 04:27:44.211433 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:44.211439 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:44.211446 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:27:44.211452 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:44.211458 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:27:44.211465 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:44.211471 | orchestrator | 2026-02-09 04:27:44.211481 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-02-09 04:27:44.211486 | orchestrator | Monday 09 February 2026 04:27:36 +0000 (0:00:00.633) 0:00:51.787 ******* 2026-02-09 04:27:44.211490 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:44.211493 | orchestrator | skipping: [testbed-node-4] 
2026-02-09 04:27:44.211497 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:27:44.211501 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:44.211504 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:27:44.211508 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:27:44.211512 | orchestrator | 2026-02-09 04:27:44.211515 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-02-09 04:27:44.211519 | orchestrator | Monday 09 February 2026 04:27:38 +0000 (0:00:01.694) 0:00:53.482 ******* 2026-02-09 04:27:44.211523 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:44.211526 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:27:44.211530 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:44.211534 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:27:44.211537 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:27:44.211541 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:27:44.211545 | orchestrator | 2026-02-09 04:27:44.211548 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-02-09 04:27:44.211552 | orchestrator | Monday 09 February 2026 04:27:40 +0000 (0:00:01.500) 0:00:54.982 ******* 2026-02-09 04:27:44.211556 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 04:27:44.211560 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-09 04:27:44.211563 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-09 04:27:44.211567 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-09 04:27:44.211571 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-09 04:27:44.211574 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-09 04:27:44.211578 | orchestrator | 2026-02-09 04:27:44.211582 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-02-09 04:27:44.211586 | orchestrator | Monday 09 February 2026 04:27:41 +0000 
(0:00:01.733) 0:00:56.716 ******* 2026-02-09 04:27:44.211590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:44.211600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:44.211605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:44.211612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:45.085656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:45.085757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:27:45.085795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:27:45.085808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:27:45.085819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:27:45.085830 | orchestrator | 2026-02-09 04:27:45.085842 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-02-09 04:27:45.085854 | orchestrator | Monday 09 February 2026 04:27:44 +0000 (0:00:02.451) 0:00:59.168 ******* 2026-02-09 04:27:45.085864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:45.085899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:45.085911 | 
orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:45.085923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:45.085943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:45.085953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:45.085963 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:45.085973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:45.085983 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:27:45.085994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:45.086004 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:45.086097 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:48.638834 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:27:48.638959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:48.639001 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:48.639010 | orchestrator | 2026-02-09 04:27:48.639018 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-02-09 04:27:48.639029 | orchestrator | Monday 09 February 2026 04:27:45 +0000 (0:00:00.881) 0:01:00.049 ******* 2026-02-09 04:27:48.639042 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:48.639053 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:48.639065 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:27:48.639077 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:48.639116 | orchestrator | skipping: [testbed-node-4] 2026-02-09 
04:27:48.639128 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:48.639139 | orchestrator | 2026-02-09 04:27:48.639151 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-02-09 04:27:48.639163 | orchestrator | Monday 09 February 2026 04:27:45 +0000 (0:00:00.853) 0:01:00.903 ******* 2026-02-09 04:27:48.639178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:48.639194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:48.639203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:48.639211 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:27:48.639250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:48.639267 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:27:48.639292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-09 04:27:48.639300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 
'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 04:27:48.639308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:48.639316 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:27:48.639323 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:27:48.639331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:48.639338 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:27:48.639346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-09 04:27:48.639358 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:27:48.639368 | orchestrator | 2026-02-09 04:27:48.639381 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-09 04:27:48.639399 | orchestrator | Monday 09 February 2026 04:27:46 +0000 (0:00:00.973) 0:01:01.877 ******* 2026-02-09 04:27:48.639419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 
'timeout': '30'}}}) 2026-02-09 04:28:20.049336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:20.049451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:20.049467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:20.049480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:20.049490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:20.049523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:28:20.049551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:28:20.049562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-09 04:28:20.049573 | orchestrator | 2026-02-09 04:28:20.049584 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-09 04:28:20.049595 | orchestrator | Monday 09 February 2026 04:27:48 +0000 (0:00:01.720) 0:01:03.597 ******* 2026-02-09 04:28:20.049606 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:28:20.049616 | 
orchestrator | skipping: [testbed-node-1] 2026-02-09 04:28:20.049626 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:28:20.049635 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:28:20.049644 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:28:20.049654 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:28:20.049663 | orchestrator | 2026-02-09 04:28:20.049673 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-09 04:28:20.049683 | orchestrator | Monday 09 February 2026 04:27:49 +0000 (0:00:00.661) 0:01:04.259 ******* 2026-02-09 04:28:20.049692 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:28:20.049701 | orchestrator | 2026-02-09 04:28:20.049711 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-09 04:28:20.049721 | orchestrator | Monday 09 February 2026 04:27:53 +0000 (0:00:04.523) 0:01:08.782 ******* 2026-02-09 04:28:20.049730 | orchestrator | 2026-02-09 04:28:20.049740 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-09 04:28:20.049749 | orchestrator | Monday 09 February 2026 04:27:53 +0000 (0:00:00.073) 0:01:08.856 ******* 2026-02-09 04:28:20.049758 | orchestrator | 2026-02-09 04:28:20.049768 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-09 04:28:20.049777 | orchestrator | Monday 09 February 2026 04:27:53 +0000 (0:00:00.074) 0:01:08.930 ******* 2026-02-09 04:28:20.049795 | orchestrator | 2026-02-09 04:28:20.049805 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-09 04:28:20.049814 | orchestrator | Monday 09 February 2026 04:27:54 +0000 (0:00:00.271) 0:01:09.201 ******* 2026-02-09 04:28:20.049824 | orchestrator | 2026-02-09 04:28:20.049837 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 
2026-02-09 04:28:20.049894 | orchestrator | Monday 09 February 2026 04:27:54 +0000 (0:00:00.104) 0:01:09.306 ******* 2026-02-09 04:28:20.049908 | orchestrator | 2026-02-09 04:28:20.049920 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-09 04:28:20.049931 | orchestrator | Monday 09 February 2026 04:27:54 +0000 (0:00:00.075) 0:01:09.381 ******* 2026-02-09 04:28:20.049942 | orchestrator | 2026-02-09 04:28:20.049954 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-09 04:28:20.049965 | orchestrator | Monday 09 February 2026 04:27:54 +0000 (0:00:00.080) 0:01:09.462 ******* 2026-02-09 04:28:20.049977 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:28:20.049988 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:28:20.049999 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:28:20.050010 | orchestrator | 2026-02-09 04:28:20.050084 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-09 04:28:20.050096 | orchestrator | Monday 09 February 2026 04:28:04 +0000 (0:00:10.140) 0:01:19.603 ******* 2026-02-09 04:28:20.050107 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:28:20.050118 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:28:20.050129 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:28:20.050141 | orchestrator | 2026-02-09 04:28:20.050152 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-09 04:28:20.050164 | orchestrator | Monday 09 February 2026 04:28:09 +0000 (0:00:04.676) 0:01:24.279 ******* 2026-02-09 04:28:20.050176 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:28:20.050188 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:28:20.050198 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:28:20.050207 | orchestrator | 2026-02-09 04:28:20.050235 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-09 04:28:20.050246 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-09 04:28:20.050258 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-09 04:28:20.050277 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-09 04:28:20.546580 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-09 04:28:20.546650 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-09 04:28:20.546657 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-09 04:28:20.546662 | orchestrator | 2026-02-09 04:28:20.546668 | orchestrator | 2026-02-09 04:28:20.546673 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:28:20.546679 | orchestrator | Monday 09 February 2026 04:28:20 +0000 (0:00:10.717) 0:01:34.997 ******* 2026-02-09 04:28:20.546684 | orchestrator | =============================================================================== 2026-02-09 04:28:20.546688 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 10.72s 2026-02-09 04:28:20.546693 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.14s 2026-02-09 04:28:20.546697 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 5.05s 2026-02-09 04:28:20.546721 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 4.68s 2026-02-09 04:28:20.546725 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.52s 2026-02-09 04:28:20.546729 | orchestrator | 
service-ks-register : ceilometer | Granting user roles ------------------ 3.93s 2026-02-09 04:28:20.546734 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.52s 2026-02-09 04:28:20.546738 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.35s 2026-02-09 04:28:20.546742 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 2.89s 2026-02-09 04:28:20.546746 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.53s 2026-02-09 04:28:20.546751 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.45s 2026-02-09 04:28:20.546755 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.25s 2026-02-09 04:28:20.546759 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.81s 2026-02-09 04:28:20.546764 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.73s 2026-02-09 04:28:20.546768 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.73s 2026-02-09 04:28:20.546772 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.72s 2026-02-09 04:28:20.546776 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.70s 2026-02-09 04:28:20.546783 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.69s 2026-02-09 04:28:20.546792 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.50s 2026-02-09 04:28:20.546799 | orchestrator | ceilometer : Check custom gnocchi_resources.yaml exists ----------------- 1.46s 2026-02-09 04:28:23.144049 | orchestrator | 2026-02-09 04:28:23 | INFO  | Task 6719683c-ccf7-41b7-b924-08d7099ec6b1 (aodh) was prepared for execution. 
2026-02-09 04:28:23.144162 | orchestrator | 2026-02-09 04:28:23 | INFO  | It takes a moment until task 6719683c-ccf7-41b7-b924-08d7099ec6b1 (aodh) has been started and output is visible here. 2026-02-09 04:28:54.320441 | orchestrator | 2026-02-09 04:28:54.320546 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:28:54.320558 | orchestrator | 2026-02-09 04:28:54.320565 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:28:54.320571 | orchestrator | Monday 09 February 2026 04:28:27 +0000 (0:00:00.303) 0:00:00.303 ******* 2026-02-09 04:28:54.320577 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:28:54.320583 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:28:54.320589 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:28:54.320594 | orchestrator | 2026-02-09 04:28:54.320600 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:28:54.320606 | orchestrator | Monday 09 February 2026 04:28:27 +0000 (0:00:00.372) 0:00:00.675 ******* 2026-02-09 04:28:54.320611 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-09 04:28:54.320617 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-09 04:28:54.320623 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-09 04:28:54.320628 | orchestrator | 2026-02-09 04:28:54.320634 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-09 04:28:54.320639 | orchestrator | 2026-02-09 04:28:54.320645 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-09 04:28:54.320650 | orchestrator | Monday 09 February 2026 04:28:28 +0000 (0:00:00.489) 0:00:01.164 ******* 2026-02-09 04:28:54.320656 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-09 04:28:54.320662 | orchestrator | 2026-02-09 04:28:54.320668 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-09 04:28:54.320673 | orchestrator | Monday 09 February 2026 04:28:29 +0000 (0:00:00.635) 0:00:01.800 ******* 2026-02-09 04:28:54.320700 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-09 04:28:54.320709 | orchestrator | 2026-02-09 04:28:54.320717 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-09 04:28:54.320726 | orchestrator | Monday 09 February 2026 04:28:32 +0000 (0:00:03.560) 0:00:05.361 ******* 2026-02-09 04:28:54.320735 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-09 04:28:54.320744 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-09 04:28:54.320753 | orchestrator | 2026-02-09 04:28:54.320762 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-09 04:28:54.320770 | orchestrator | Monday 09 February 2026 04:28:38 +0000 (0:00:06.138) 0:00:11.500 ******* 2026-02-09 04:28:54.320779 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-09 04:28:54.320789 | orchestrator | 2026-02-09 04:28:54.320798 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-09 04:28:54.320808 | orchestrator | Monday 09 February 2026 04:28:42 +0000 (0:00:03.236) 0:00:14.736 ******* 2026-02-09 04:28:54.320813 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-09 04:28:54.320819 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-09 04:28:54.320829 | orchestrator | 2026-02-09 04:28:54.320837 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-09 
04:28:54.320846 | orchestrator | Monday 09 February 2026 04:28:45 +0000 (0:00:03.731) 0:00:18.468 ******* 2026-02-09 04:28:54.320855 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-09 04:28:54.320865 | orchestrator | 2026-02-09 04:28:54.320874 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-09 04:28:54.320883 | orchestrator | Monday 09 February 2026 04:28:48 +0000 (0:00:03.010) 0:00:21.478 ******* 2026-02-09 04:28:54.320892 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-09 04:28:54.320900 | orchestrator | 2026-02-09 04:28:54.320906 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-09 04:28:54.320912 | orchestrator | Monday 09 February 2026 04:28:52 +0000 (0:00:03.582) 0:00:25.060 ******* 2026-02-09 04:28:54.320920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:28:54.320944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:28:54.320958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:28:54.320965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:28:54.320972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:28:54.320977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:28:54.320984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:54.320996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:55.620623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:55.620728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 
04:28:55.620739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:55.620747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:28:55.620756 | orchestrator | 2026-02-09 04:28:55.620765 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-09 04:28:55.620774 | orchestrator | Monday 09 February 2026 04:28:54 +0000 (0:00:01.922) 0:00:26.983 ******* 2026-02-09 04:28:55.620781 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:28:55.620789 | orchestrator | 2026-02-09 04:28:55.620797 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-09 04:28:55.620804 | orchestrator | Monday 09 February 2026 04:28:54 +0000 (0:00:00.133) 0:00:27.116 ******* 2026-02-09 04:28:55.620812 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:28:55.620819 | orchestrator | skipping: 
[testbed-node-1] 2026-02-09 04:28:55.620826 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:28:55.620833 | orchestrator | 2026-02-09 04:28:55.620841 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-09 04:28:55.620848 | orchestrator | Monday 09 February 2026 04:28:54 +0000 (0:00:00.513) 0:00:27.630 ******* 2026-02-09 04:28:55.620856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 04:28:55.620883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 04:28:55.620892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:28:55.620900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 04:28:55.620908 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:28:55.620916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 04:28:55.620924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 04:28:55.620931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:28:55.620949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-02-09 04:29:00.502792 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:29:00.502906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 04:29:00.502927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 04:29:00.502941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:29:00.502953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 04:29:00.502964 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:29:00.502976 | orchestrator | 2026-02-09 04:29:00.502988 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-09 04:29:00.503000 | orchestrator | Monday 09 February 2026 04:28:55 +0000 (0:00:00.660) 0:00:28.290 ******* 2026-02-09 04:29:00.503012 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:29:00.503049 | orchestrator | 2026-02-09 04:29:00.503061 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-09 04:29:00.503072 | orchestrator | Monday 09 February 2026 04:28:56 +0000 (0:00:00.778) 0:00:29.069 ******* 2026-02-09 04:29:00.503083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:00.503114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:00.503127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:00.503138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:00.503150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:00.503169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:00.503180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:00.503232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:01.278085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:01.278218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:01.278238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:01.278251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:01.278288 | orchestrator | 2026-02-09 04:29:01.278302 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-09 04:29:01.278315 | orchestrator | Monday 09 February 2026 04:29:00 +0000 (0:00:04.102) 0:00:33.171 ******* 2026-02-09 04:29:01.278328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 04:29:01.278340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 04:29:01.278370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:29:01.278383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 04:29:01.278394 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:29:01.278407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 04:29:01.278426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 04:29:01.278437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:29:01.278449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  
2026-02-09 04:29:01.278460 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:29:01.278480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 04:29:02.367039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 04:29:02.367141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:29:02.367180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 04:29:02.367246 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:29:02.367261 | orchestrator | 2026-02-09 04:29:02.367273 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-09 04:29:02.367286 | orchestrator | Monday 09 February 2026 04:29:01 +0000 (0:00:00.775) 0:00:33.947 ******* 2026-02-09 04:29:02.367298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 04:29:02.367310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 04:29:02.367322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:29:02.367353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}})  2026-02-09 04:29:02.367365 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:29:02.367377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 04:29:02.367397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 04:29:02.367409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:29:02.367420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 04:29:02.367432 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:29:02.367450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-09 04:29:06.429853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 04:29:06.429986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 04:29:06.430004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 04:29:06.430078 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:29:06.430096 | orchestrator | 2026-02-09 04:29:06.430108 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 
2026-02-09 04:29:06.430121 | orchestrator | Monday 09 February 2026 04:29:02 +0000 (0:00:01.090) 0:00:35.038 ******* 2026-02-09 04:29:06.430133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:06.430146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:06.430177 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:06.430253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:06.430267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:06.430279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:06.430290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:06.430302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:06.430313 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:06.430334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:15.417783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:15.417894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:15.417910 | orchestrator | 2026-02-09 04:29:15.417924 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-09 04:29:15.417937 | orchestrator | Monday 09 February 2026 04:29:06 +0000 (0:00:04.057) 0:00:39.095 ******* 2026-02-09 04:29:15.417962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:15.417974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:15.417986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:15.418101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:15.418118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:15.418130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:15.418142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:15.418153 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:15.418164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:15.418247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:15.418270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:20.478529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:20.478630 | orchestrator | 2026-02-09 04:29:20.478645 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-09 04:29:20.478655 | orchestrator | Monday 09 February 2026 04:29:15 +0000 (0:00:08.988) 0:00:48.084 ******* 2026-02-09 04:29:20.478665 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:29:20.478675 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:29:20.478684 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:29:20.478692 | orchestrator | 2026-02-09 04:29:20.478701 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-09 04:29:20.478710 | orchestrator | Monday 09 February 2026 04:29:17 +0000 (0:00:01.847) 0:00:49.931 ******* 2026-02-09 04:29:20.478720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:20.478731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:20.478761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-09 04:29:20.478786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:20.478796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:20.478806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-09 04:29:20.478815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:20.478824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:20.478839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:20.478849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:29:20.478864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:30:16.714945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-09 04:30:16.715069 | orchestrator | 2026-02-09 04:30:16.715091 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-09 04:30:16.715108 | orchestrator | Monday 09 February 2026 04:29:20 +0000 (0:00:03.214) 0:00:53.146 ******* 2026-02-09 04:30:16.715122 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:30:16.715137 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:30:16.715152 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:30:16.715165 | orchestrator | 2026-02-09 04:30:16.715179 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-02-09 04:30:16.715193 | orchestrator | Monday 09 February 2026 04:29:20 +0000 (0:00:00.327) 0:00:53.474 ******* 2026-02-09 04:30:16.715205 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:30:16.715218 | orchestrator | 2026-02-09 04:30:16.715231 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-02-09 04:30:16.715244 | orchestrator | Monday 09 February 2026 04:29:22 +0000 (0:00:02.027) 0:00:55.501 ******* 2026-02-09 04:30:16.715258 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:30:16.715271 | orchestrator | 2026-02-09 04:30:16.715286 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-02-09 04:30:16.715330 | orchestrator | Monday 09 February 2026 04:29:24 +0000 (0:00:02.096) 0:00:57.597 ******* 2026-02-09 04:30:16.715345 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:30:16.715359 | orchestrator | 2026-02-09 04:30:16.715373 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-09 04:30:16.715387 | orchestrator | Monday 09 
February 2026 04:29:37 +0000 (0:00:12.249) 0:01:09.847 ******* 2026-02-09 04:30:16.715440 | orchestrator | 2026-02-09 04:30:16.715453 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-09 04:30:16.715467 | orchestrator | Monday 09 February 2026 04:29:37 +0000 (0:00:00.083) 0:01:09.931 ******* 2026-02-09 04:30:16.715566 | orchestrator | 2026-02-09 04:30:16.715578 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-09 04:30:16.715591 | orchestrator | Monday 09 February 2026 04:29:37 +0000 (0:00:00.076) 0:01:10.007 ******* 2026-02-09 04:30:16.715603 | orchestrator | 2026-02-09 04:30:16.715616 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-02-09 04:30:16.715629 | orchestrator | Monday 09 February 2026 04:29:37 +0000 (0:00:00.276) 0:01:10.284 ******* 2026-02-09 04:30:16.715642 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:30:16.715657 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:30:16.715670 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:30:16.715683 | orchestrator | 2026-02-09 04:30:16.715696 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-02-09 04:30:16.715709 | orchestrator | Monday 09 February 2026 04:29:48 +0000 (0:00:10.625) 0:01:20.910 ******* 2026-02-09 04:30:16.715721 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:30:16.715734 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:30:16.715748 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:30:16.715760 | orchestrator | 2026-02-09 04:30:16.715773 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-02-09 04:30:16.715785 | orchestrator | Monday 09 February 2026 04:29:56 +0000 (0:00:08.110) 0:01:29.021 ******* 2026-02-09 04:30:16.715800 | orchestrator | changed: [testbed-node-0] 2026-02-09 
04:30:16.715812 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:30:16.715825 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:30:16.715837 | orchestrator | 2026-02-09 04:30:16.715850 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-02-09 04:30:16.715864 | orchestrator | Monday 09 February 2026 04:30:06 +0000 (0:00:09.687) 0:01:38.708 ******* 2026-02-09 04:30:16.715876 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:30:16.715889 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:30:16.715902 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:30:16.715915 | orchestrator | 2026-02-09 04:30:16.715928 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:30:16.715943 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 04:30:16.715958 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-09 04:30:16.715972 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-09 04:30:16.715986 | orchestrator | 2026-02-09 04:30:16.715999 | orchestrator | 2026-02-09 04:30:16.716013 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:30:16.716027 | orchestrator | Monday 09 February 2026 04:30:16 +0000 (0:00:10.290) 0:01:48.998 ******* 2026-02-09 04:30:16.716040 | orchestrator | =============================================================================== 2026-02-09 04:30:16.716053 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.25s 2026-02-09 04:30:16.716068 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 10.63s 2026-02-09 04:30:16.716111 | orchestrator | aodh : Restart aodh-notifier container 
--------------------------------- 10.29s 2026-02-09 04:30:16.716146 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 9.69s 2026-02-09 04:30:16.716159 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.99s 2026-02-09 04:30:16.716173 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 8.11s 2026-02-09 04:30:16.716186 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.14s 2026-02-09 04:30:16.716198 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.10s 2026-02-09 04:30:16.716210 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.06s 2026-02-09 04:30:16.716223 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.73s 2026-02-09 04:30:16.716235 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.58s 2026-02-09 04:30:16.716248 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.56s 2026-02-09 04:30:16.716260 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.24s 2026-02-09 04:30:16.716272 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.21s 2026-02-09 04:30:16.716285 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.01s 2026-02-09 04:30:16.716297 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.10s 2026-02-09 04:30:16.716309 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.03s 2026-02-09 04:30:16.716322 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 1.92s 2026-02-09 04:30:16.716335 | orchestrator | aodh : Copying over wsgi-aodh files for services 
------------------------ 1.85s
2026-02-09 04:30:16.716348 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.09s
2026-02-09 04:30:19.464030 | orchestrator | 2026-02-09 04:30:19 | INFO  | Task 4906d02e-e965-402c-ba16-3a26ade27e8d (kolla-ceph-rgw) was prepared for execution.
2026-02-09 04:30:19.464128 | orchestrator | 2026-02-09 04:30:19 | INFO  | It takes a moment until task 4906d02e-e965-402c-ba16-3a26ade27e8d (kolla-ceph-rgw) has been started and output is visible here.
2026-02-09 04:30:56.344228 | orchestrator |
2026-02-09 04:30:56.344361 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 04:30:56.344388 | orchestrator |
2026-02-09 04:30:56.344409 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 04:30:56.344428 | orchestrator | Monday 09 February 2026 04:30:23 +0000 (0:00:00.292) 0:00:00.292 *******
2026-02-09 04:30:56.344448 | orchestrator | ok: [testbed-manager]
2026-02-09 04:30:56.344468 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:30:56.344486 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:30:56.344505 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:30:56.344525 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:30:56.344544 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:30:56.344562 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:30:56.344581 | orchestrator |
2026-02-09 04:30:56.344599 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 04:30:56.344640 | orchestrator | Monday 09 February 2026 04:30:24 +0000 (0:00:00.888) 0:00:01.181 *******
2026-02-09 04:30:56.344660 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-09 04:30:56.344709 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-09 04:30:56.344730 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-09 04:30:56.344751 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-09 04:30:56.344771 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-09 04:30:56.344790 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-09 04:30:56.344808 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-09 04:30:56.344827 | orchestrator |
2026-02-09 04:30:56.344846 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-09 04:30:56.344884 | orchestrator |
2026-02-09 04:30:56.344898 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-09 04:30:56.344911 | orchestrator | Monday 09 February 2026 04:30:25 +0000 (0:00:00.819) 0:00:02.000 *******
2026-02-09 04:30:56.344925 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 04:30:56.344940 | orchestrator |
2026-02-09 04:30:56.344953 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-09 04:30:56.344965 | orchestrator | Monday 09 February 2026 04:30:27 +0000 (0:00:01.638) 0:00:03.638 *******
2026-02-09 04:30:56.344979 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-09 04:30:56.344993 | orchestrator |
2026-02-09 04:30:56.345006 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-02-09 04:30:56.345019 | orchestrator | Monday 09 February 2026 04:30:31 +0000 (0:00:03.795) 0:00:07.434 *******
2026-02-09 04:30:56.345032 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-02-09 04:30:56.345047 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-02-09 04:30:56.345059 | orchestrator |
2026-02-09 04:30:56.345072 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-02-09 04:30:56.345084 | orchestrator | Monday 09 February 2026 04:30:37 +0000 (0:00:06.482) 0:00:13.917 *******
2026-02-09 04:30:56.345097 | orchestrator | ok: [testbed-manager] => (item=service)
2026-02-09 04:30:56.345109 | orchestrator |
2026-02-09 04:30:56.345122 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-02-09 04:30:56.345134 | orchestrator | Monday 09 February 2026 04:30:40 +0000 (0:00:03.366) 0:00:17.283 *******
2026-02-09 04:30:56.345144 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-09 04:30:56.345155 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-02-09 04:30:56.345166 | orchestrator |
2026-02-09 04:30:56.345176 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-02-09 04:30:56.345187 | orchestrator | Monday 09 February 2026 04:30:44 +0000 (0:00:03.728) 0:00:21.012 *******
2026-02-09 04:30:56.345198 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-02-09 04:30:56.345209 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-02-09 04:30:56.345220 | orchestrator |
2026-02-09 04:30:56.345230 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-02-09 04:30:56.345241 | orchestrator | Monday 09 February 2026 04:30:50 +0000 (0:00:06.280) 0:00:27.293 *******
2026-02-09 04:30:56.345252 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-09 04:30:56.345262 | orchestrator |
2026-02-09 04:30:56.345273 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09
04:30:56.345284 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:30:56.345295 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:30:56.345309 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:30:56.345328 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:30:56.345345 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:30:56.345388 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:30:56.345418 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:30:56.345431 | orchestrator |
2026-02-09 04:30:56.345441 | orchestrator |
2026-02-09 04:30:56.345452 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 04:30:56.345463 | orchestrator | Monday 09 February 2026 04:30:55 +0000 (0:00:04.903) 0:00:32.197 *******
2026-02-09 04:30:56.345473 | orchestrator | ===============================================================================
2026-02-09 04:30:56.345484 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.48s
2026-02-09 04:30:56.345494 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.28s
2026-02-09 04:30:56.345513 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.90s
2026-02-09 04:30:56.345524 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.80s
2026-02-09 04:30:56.345535 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.73s
2026-02-09 04:30:56.345545 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.37s
2026-02-09 04:30:56.345556 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.64s
2026-02-09 04:30:56.345567 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.89s
2026-02-09 04:30:56.345578 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2026-02-09 04:30:58.869902 | orchestrator | 2026-02-09 04:30:58 | INFO  | Task 26073407-0366-43d8-8a3b-a521288d8922 (gnocchi) was prepared for execution.
2026-02-09 04:30:58.870001 | orchestrator | 2026-02-09 04:30:58 | INFO  | It takes a moment until task 26073407-0366-43d8-8a3b-a521288d8922 (gnocchi) has been started and output is visible here.
2026-02-09 04:31:04.390652 | orchestrator |
2026-02-09 04:31:04.390794 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 04:31:04.390806 | orchestrator |
2026-02-09 04:31:04.390812 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 04:31:04.390817 | orchestrator | Monday 09 February 2026 04:31:03 +0000 (0:00:00.273) 0:00:00.273 *******
2026-02-09 04:31:04.390822 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:31:04.390828 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:31:04.390832 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:31:04.390837 | orchestrator |
2026-02-09 04:31:04.390842 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 04:31:04.390846 | orchestrator | Monday 09 February 2026 04:31:03 +0000 (0:00:00.335) 0:00:00.609 *******
2026-02-09 04:31:04.390851 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-02-09 04:31:04.390856 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-02-09 04:31:04.390861 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-02-09 04:31:04.390866 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-02-09 04:31:04.390870 | orchestrator |
2026-02-09 04:31:04.390874 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-02-09 04:31:04.390879 | orchestrator | skipping: no hosts matched
2026-02-09 04:31:04.390884 | orchestrator |
2026-02-09 04:31:04.390889 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 04:31:04.390894 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:31:04.390899 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:31:04.390904 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-09 04:31:04.390908 | orchestrator |
2026-02-09 04:31:04.390913 | orchestrator |
2026-02-09 04:31:04.390936 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 04:31:04.390941 | orchestrator | Monday 09 February 2026 04:31:04 +0000 (0:00:00.396) 0:00:01.005 *******
2026-02-09 04:31:04.390945 | orchestrator | ===============================================================================
2026-02-09 04:31:04.390950 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-02-09 04:31:04.390954 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-02-09 04:31:06.924931 | orchestrator | 2026-02-09 04:31:06 | INFO  | Task b7890b9f-9d42-453a-9f1d-8dd1e393dd34 (manila) was prepared for execution.
2026-02-09 04:31:06.925044 | orchestrator | 2026-02-09 04:31:06 | INFO  | It takes a moment until task b7890b9f-9d42-453a-9f1d-8dd1e393dd34 (manila) has been started and output is visible here.
2026-02-09 04:31:46.760483 | orchestrator |
2026-02-09 04:31:46.760574 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 04:31:46.760584 | orchestrator |
2026-02-09 04:31:46.760591 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 04:31:46.760598 | orchestrator | Monday 09 February 2026 04:31:11 +0000 (0:00:00.337) 0:00:00.337 *******
2026-02-09 04:31:46.760605 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:31:46.760613 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:31:46.760619 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:31:46.760625 | orchestrator |
2026-02-09 04:31:46.760632 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 04:31:46.760638 | orchestrator | Monday 09 February 2026 04:31:11 +0000 (0:00:00.332) 0:00:00.670 *******
2026-02-09 04:31:46.760644 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-02-09 04:31:46.760651 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-02-09 04:31:46.760657 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-02-09 04:31:46.760663 | orchestrator |
2026-02-09 04:31:46.760669 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-02-09 04:31:46.760675 | orchestrator |
2026-02-09 04:31:46.760682 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-09 04:31:46.760688 | orchestrator | Monday 09 February 2026 04:31:12 +0000 (0:00:00.470) 0:00:01.140 *******
2026-02-09 04:31:46.760694 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:31:46.760701 | orchestrator |
2026-02-09 04:31:46.760719 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-09 04:31:46.760725 | orchestrator | Monday 09 February 2026 04:31:12 +0000 (0:00:00.594) 0:00:01.735 *******
2026-02-09 04:31:46.760732 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:31:46.760739 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:31:46.760745 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:31:46.760752 | orchestrator |
2026-02-09 04:31:46.760758 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-02-09 04:31:46.760764 | orchestrator | Monday 09 February 2026 04:31:13 +0000 (0:00:00.510) 0:00:02.245 *******
2026-02-09 04:31:46.760770 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-02-09 04:31:46.760776 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-02-09 04:31:46.760783 | orchestrator |
2026-02-09 04:31:46.760789 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-02-09 04:31:46.760795 | orchestrator | Monday 09 February 2026 04:31:19 +0000 (0:00:06.080) 0:00:08.326 *******
2026-02-09 04:31:46.760801 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-02-09 04:31:46.760808 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-02-09 04:31:46.760814 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-02-09 04:31:46.760836 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-02-09 04:31:46.760842 | orchestrator |
2026-02-09 04:31:46.760848 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-02-09 04:31:46.760854 | orchestrator | Monday 09 February 2026 04:31:31 +0000 (0:00:11.897) 0:00:20.223 *******
2026-02-09 04:31:46.760861 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-09 04:31:46.760867 | orchestrator |
2026-02-09 04:31:46.760873 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-02-09 04:31:46.760879 | orchestrator | Monday 09 February 2026 04:31:34 +0000 (0:00:03.020) 0:00:23.243 *******
2026-02-09 04:31:46.760885 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-09 04:31:46.760891 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-02-09 04:31:46.760897 | orchestrator |
2026-02-09 04:31:46.760903 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-02-09 04:31:46.760945 | orchestrator | Monday 09 February 2026 04:31:37 +0000 (0:00:03.522) 0:00:26.765 *******
2026-02-09 04:31:46.760952 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-09 04:31:46.760958 | orchestrator |
2026-02-09 04:31:46.760963 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-02-09 04:31:46.760969 | orchestrator | Monday 09 February 2026 04:31:40 +0000 (0:00:02.952) 0:00:29.718 *******
2026-02-09 04:31:46.760975 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-02-09 04:31:46.760981 | orchestrator |
2026-02-09 04:31:46.760987 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-02-09 04:31:46.760992 | orchestrator | Monday 09 February 2026 04:31:44 +0000 (0:00:03.594) 0:00:33.312 *******
2026-02-09 04:31:46.761013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image':
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-09 04:31:46.761023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-09 04:31:46.761033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-09 04:31:46.761045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:31:46.761054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:31:46.761061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:31:46.761074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-09 04:31:57.492493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-09 04:31:57.492679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-09 04:31:57.492753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-09 04:31:57.492777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-09 04:31:57.492796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-09 04:31:57.492815 | orchestrator | 2026-02-09 04:31:57.492836 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-09 04:31:57.492857 | orchestrator | Monday 09 February 2026 04:31:46 +0000 (0:00:02.403) 0:00:35.715 ******* 2026-02-09 04:31:57.492876 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:31:57.492895 | orchestrator | 2026-02-09 04:31:57.492914 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-02-09 04:31:57.492931 | orchestrator | Monday 09 February 2026 04:31:47 +0000 (0:00:00.559) 0:00:36.275 ******* 2026-02-09 04:31:57.492949 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:31:57.493004 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:31:57.493024 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:31:57.493043 | orchestrator | 2026-02-09 04:31:57.493061 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-02-09 04:31:57.493079 | orchestrator | Monday 09 February 2026 04:31:48 +0000 (0:00:01.022) 0:00:37.297 ******* 2026-02-09 04:31:57.493098 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-09 04:31:57.493146 | orchestrator | 
skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-09 04:31:57.493166 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-09 04:31:57.493183 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-09 04:31:57.493216 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-09 04:31:57.493247 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-09 04:31:57.493265 | orchestrator | 2026-02-09 04:31:57.493283 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-02-09 04:31:57.493301 | orchestrator | Monday 09 February 2026 04:31:50 +0000 (0:00:01.730) 0:00:39.028 ******* 2026-02-09 04:31:57.493320 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-09 04:31:57.493338 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-09 04:31:57.493357 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': 
['CEPHFS']}) 2026-02-09 04:31:57.493375 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-09 04:31:57.493393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-09 04:31:57.493412 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-09 04:31:57.493429 | orchestrator | 2026-02-09 04:31:57.493447 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-02-09 04:31:57.493465 | orchestrator | Monday 09 February 2026 04:31:51 +0000 (0:00:01.208) 0:00:40.236 ******* 2026-02-09 04:31:57.493484 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-02-09 04:31:57.493503 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-02-09 04:31:57.493522 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-02-09 04:31:57.493540 | orchestrator | 2026-02-09 04:31:57.493559 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-02-09 04:31:57.493579 | orchestrator | Monday 09 February 2026 04:31:52 +0000 (0:00:00.735) 0:00:40.972 ******* 2026-02-09 04:31:57.493598 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:31:57.493616 | orchestrator | 2026-02-09 04:31:57.493634 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-02-09 04:31:57.493653 | orchestrator | Monday 09 February 2026 04:31:52 +0000 (0:00:00.128) 0:00:41.100 ******* 2026-02-09 04:31:57.493669 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:31:57.493686 | orchestrator | skipping: 
[testbed-node-1] 2026-02-09 04:31:57.493704 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:31:57.493724 | orchestrator | 2026-02-09 04:31:57.493743 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-09 04:31:57.493762 | orchestrator | Monday 09 February 2026 04:31:52 +0000 (0:00:00.573) 0:00:41.674 ******* 2026-02-09 04:31:57.493781 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:31:57.493800 | orchestrator | 2026-02-09 04:31:57.493817 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-02-09 04:31:57.493834 | orchestrator | Monday 09 February 2026 04:31:53 +0000 (0:00:00.669) 0:00:42.343 ******* 2026-02-09 04:31:57.493875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-09 04:31:58.459598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:31:58.459699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:31:58.459715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:31:58.459729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:31:58.459740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:31:58.459790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:31:58.459810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:31:58.459822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:31:58.459834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:31:58.459845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:31:58.459856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:31:58.459875 | orchestrator |
2026-02-09 04:31:58.459888 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] ***
2026-02-09 04:31:58.459901 | orchestrator | Monday 09 February 2026 04:31:57 +0000 (0:00:04.104) 0:00:46.447 *******
2026-02-09 04:31:58.459919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:31:59.151199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:31:59.151327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:31:59.151345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:31:59.151359 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:31:59.151373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:31:59.151409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:31:59.151421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:31:59.151456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:31:59.151469 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:31:59.151480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:31:59.151492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:31:59.151503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:31:59.151523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:31:59.151534 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:31:59.151545 | orchestrator |
2026-02-09 04:31:59.151557 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ******
2026-02-09 04:31:59.151570 | orchestrator | Monday 09 February 2026 04:31:58 +0000 (0:00:00.969) 0:00:47.417 *******
2026-02-09 04:31:59.151589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:32:03.535649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:32:03.535769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:32:03.535787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:32:03.535821 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:32:03.535836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:32:03.535849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:32:03.535860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:32:03.535895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:32:03.535908 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:32:03.535920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:32:03.535940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:32:03.535952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:32:03.535963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:32:03.535975 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:32:03.536039 | orchestrator |
2026-02-09 04:32:03.536052 | orchestrator | TASK [manila : Copying over config.json files for services] ********************
2026-02-09 04:32:03.536065 | orchestrator | Monday 09 February 2026 04:31:59 +0000 (0:00:00.896) 0:00:48.313 *******
2026-02-09 04:32:03.536092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:32:10.530625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:32:10.530740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:32:10.530779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:32:10.530792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:32:10.530802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:32:10.530842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:32:10.530856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:32:10.530874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:32:10.530885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:32:10.530895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:32:10.530905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 04:32:10.530916 | orchestrator |
2026-02-09 04:32:10.530927 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-02-09 04:32:10.530939 | orchestrator | Monday 09 February 2026 04:32:03 +0000 (0:00:04.405) 0:00:52.719 *******
2026-02-09 04:32:10.530961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:32:15.096802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:32:15.096959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-09 04:32:15.096980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:32:15.096994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-09 04:32:15.097007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-09 04:32:15.097093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 04:32:15.097124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:15.097143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 04:32:15.097155 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:15.097167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:15.097178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:15.097190 | orchestrator | 2026-02-09 04:32:15.097203 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-02-09 04:32:15.097246 | orchestrator | Monday 09 February 2026 04:32:10 +0000 (0:00:06.750) 0:00:59.469 ******* 2026-02-09 04:32:15.097258 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-09 04:32:15.097269 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-09 04:32:15.097280 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-09 04:32:15.097291 | orchestrator | 2026-02-09 04:32:15.097307 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-09 04:32:15.097321 | orchestrator | Monday 09 February 2026 04:32:14 +0000 (0:00:03.871) 0:01:03.341 ******* 2026-02-09 04:32:15.097353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-09 04:32:18.446937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:32:18.447093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 04:32:18.447115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 04:32:18.447129 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:32:18.447144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-09 04:32:18.447174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:32:18.447206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 04:32:18.447236 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 04:32:18.447249 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:32:18.447268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-09 04:32:18.447287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 04:32:18.447298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 04:32:18.447316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 04:32:18.447339 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:32:18.447351 | orchestrator | 2026-02-09 04:32:18.447364 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-09 04:32:18.447376 | orchestrator | Monday 09 February 2026 04:32:15 +0000 (0:00:00.698) 0:01:04.039 ******* 2026-02-09 04:32:18.447396 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-09 04:32:57.128192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-09 04:32:57.128363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-09 04:32:57.128380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:57.128428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:57.128439 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:57.128466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:57.128479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:57.128489 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:57.128499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:57.128509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:57.128531 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-09 04:32:57.128542 | orchestrator | 2026-02-09 04:32:57.128554 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-02-09 04:32:57.128565 | orchestrator | Monday 09 February 2026 04:32:18 +0000 (0:00:03.371) 0:01:07.411 ******* 2026-02-09 04:32:57.128575 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:32:57.128585 | orchestrator | 2026-02-09 04:32:57.128595 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-02-09 04:32:57.128605 | orchestrator | Monday 09 February 2026 04:32:20 +0000 (0:00:02.043) 0:01:09.454 ******* 2026-02-09 04:32:57.128614 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:32:57.128624 | orchestrator | 2026-02-09 04:32:57.128634 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-02-09 04:32:57.128643 | orchestrator | Monday 09 February 2026 04:32:22 +0000 (0:00:02.048) 0:01:11.502 ******* 2026-02-09 04:32:57.128653 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:32:57.128662 | orchestrator | 2026-02-09 04:32:57.128671 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-09 04:32:57.128681 | orchestrator | Monday 09 February 2026 04:32:56 +0000 (0:00:34.234) 0:01:45.736 ******* 2026-02-09 04:32:57.128690 | 
orchestrator | 2026-02-09 04:32:57.128706 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-09 04:33:43.073707 | orchestrator | Monday 09 February 2026 04:32:56 +0000 (0:00:00.074) 0:01:45.811 ******* 2026-02-09 04:33:43.073844 | orchestrator | 2026-02-09 04:33:43.073871 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-09 04:33:43.073890 | orchestrator | Monday 09 February 2026 04:32:57 +0000 (0:00:00.075) 0:01:45.887 ******* 2026-02-09 04:33:43.073909 | orchestrator | 2026-02-09 04:33:43.073926 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-02-09 04:33:43.073944 | orchestrator | Monday 09 February 2026 04:32:57 +0000 (0:00:00.075) 0:01:45.962 ******* 2026-02-09 04:33:43.073963 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:33:43.073982 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:33:43.074000 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:33:43.074101 | orchestrator | 2026-02-09 04:33:43.074121 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-02-09 04:33:43.074140 | orchestrator | Monday 09 February 2026 04:33:12 +0000 (0:00:15.250) 0:02:01.213 ******* 2026-02-09 04:33:43.074201 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:33:43.074221 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:33:43.074240 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:33:43.074258 | orchestrator | 2026-02-09 04:33:43.074275 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-02-09 04:33:43.074294 | orchestrator | Monday 09 February 2026 04:33:18 +0000 (0:00:06.064) 0:02:07.278 ******* 2026-02-09 04:33:43.074348 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:33:43.074367 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:33:43.074414 | 
orchestrator | changed: [testbed-node-2] 2026-02-09 04:33:43.074433 | orchestrator | 2026-02-09 04:33:43.074451 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-02-09 04:33:43.074467 | orchestrator | Monday 09 February 2026 04:33:28 +0000 (0:00:10.488) 0:02:17.766 ******* 2026-02-09 04:33:43.074486 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:33:43.074504 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:33:43.074522 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:33:43.074538 | orchestrator | 2026-02-09 04:33:43.074554 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:33:43.074571 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 04:33:43.074589 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-09 04:33:43.074605 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-09 04:33:43.074621 | orchestrator | 2026-02-09 04:33:43.074637 | orchestrator | 2026-02-09 04:33:43.074653 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:33:43.074670 | orchestrator | Monday 09 February 2026 04:33:42 +0000 (0:00:13.634) 0:02:31.401 ******* 2026-02-09 04:33:43.074686 | orchestrator | =============================================================================== 2026-02-09 04:33:43.074705 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 34.23s 2026-02-09 04:33:43.074721 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.25s 2026-02-09 04:33:43.074737 | orchestrator | manila : Restart manila-share container -------------------------------- 13.63s 2026-02-09 04:33:43.074753 | orchestrator | service-ks-register : 
manila | Creating endpoints ---------------------- 11.90s 2026-02-09 04:33:43.074769 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.49s 2026-02-09 04:33:43.074786 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.75s 2026-02-09 04:33:43.074822 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.08s 2026-02-09 04:33:43.074840 | orchestrator | manila : Restart manila-data container ---------------------------------- 6.06s 2026-02-09 04:33:43.074856 | orchestrator | manila : Copying over config.json files for services -------------------- 4.41s 2026-02-09 04:33:43.074872 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.10s 2026-02-09 04:33:43.074889 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.87s 2026-02-09 04:33:43.074905 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.59s 2026-02-09 04:33:43.074921 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.52s 2026-02-09 04:33:43.074937 | orchestrator | manila : Check manila containers ---------------------------------------- 3.37s 2026-02-09 04:33:43.074953 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.02s 2026-02-09 04:33:43.074969 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 2.95s 2026-02-09 04:33:43.074987 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.40s 2026-02-09 04:33:43.075004 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.05s 2026-02-09 04:33:43.075022 | orchestrator | manila : Creating Manila database --------------------------------------- 2.04s 2026-02-09 04:33:43.075039 | orchestrator | manila : Copy over multiple ceph 
configs for Manila --------------------- 1.73s 2026-02-09 04:33:43.466494 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-02-09 04:33:55.670419 | orchestrator | 2026-02-09 04:33:55 | INFO  | Task eb656c89-b9cc-4dad-bcb3-c6ef7cfdeeec (netdata) was prepared for execution. 2026-02-09 04:33:55.670605 | orchestrator | 2026-02-09 04:33:55 | INFO  | It takes a moment until task eb656c89-b9cc-4dad-bcb3-c6ef7cfdeeec (netdata) has been started and output is visible here. 2026-02-09 04:35:28.454366 | orchestrator | 2026-02-09 04:35:28.454503 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:35:28.454519 | orchestrator | 2026-02-09 04:35:28.454532 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:35:28.454543 | orchestrator | Monday 09 February 2026 04:34:00 +0000 (0:00:00.256) 0:00:00.256 ******* 2026-02-09 04:35:28.454554 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-09 04:35:28.454566 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-09 04:35:28.454577 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-09 04:35:28.454588 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-09 04:35:28.454599 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-09 04:35:28.454609 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-09 04:35:28.454620 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-09 04:35:28.454631 | orchestrator | 2026-02-09 04:35:28.454641 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-09 04:35:28.454652 | orchestrator | 2026-02-09 04:35:28.454663 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 
2026-02-09 04:35:28.454674 | orchestrator | Monday 09 February 2026 04:34:01 +0000 (0:00:00.937) 0:00:01.193 ******* 2026-02-09 04:35:28.454687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 04:35:28.454700 | orchestrator | 2026-02-09 04:35:28.454711 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-09 04:35:28.454722 | orchestrator | Monday 09 February 2026 04:34:03 +0000 (0:00:01.415) 0:00:02.608 ******* 2026-02-09 04:35:28.454733 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:35:28.454777 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:35:28.454788 | orchestrator | ok: [testbed-manager] 2026-02-09 04:35:28.454799 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:35:28.454810 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:35:28.454821 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:35:28.454831 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:35:28.454842 | orchestrator | 2026-02-09 04:35:28.454853 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-09 04:35:28.454867 | orchestrator | Monday 09 February 2026 04:34:04 +0000 (0:00:01.973) 0:00:04.581 ******* 2026-02-09 04:35:28.454880 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:35:28.454892 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:35:28.454904 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:35:28.454916 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:35:28.454928 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:35:28.454940 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:35:28.454954 | orchestrator | ok: [testbed-manager] 2026-02-09 04:35:28.454966 | orchestrator | 2026-02-09 04:35:28.454978 | orchestrator | TASK [osism.services.netdata 
: Add repository gpg key] ************************* 2026-02-09 04:35:28.454991 | orchestrator | Monday 09 February 2026 04:34:07 +0000 (0:00:02.262) 0:00:06.844 ******* 2026-02-09 04:35:28.455003 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:35:28.455017 | orchestrator | changed: [testbed-manager] 2026-02-09 04:35:28.455029 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:35:28.455041 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:35:28.455053 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:35:28.455066 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:35:28.455078 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:35:28.455114 | orchestrator | 2026-02-09 04:35:28.455127 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-09 04:35:28.455140 | orchestrator | Monday 09 February 2026 04:34:08 +0000 (0:00:01.590) 0:00:08.434 ******* 2026-02-09 04:35:28.455152 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:35:28.455164 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:35:28.455189 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:35:28.455202 | orchestrator | changed: [testbed-manager] 2026-02-09 04:35:28.455215 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:35:28.455226 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:35:28.455237 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:35:28.455247 | orchestrator | 2026-02-09 04:35:28.455258 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-09 04:35:28.455269 | orchestrator | Monday 09 February 2026 04:34:23 +0000 (0:00:14.508) 0:00:22.943 ******* 2026-02-09 04:35:28.455279 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:35:28.455290 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:35:28.455300 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:35:28.455311 | orchestrator | changed: 
[testbed-manager] 2026-02-09 04:35:28.455321 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:35:28.455331 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:35:28.455342 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:35:28.455352 | orchestrator | 2026-02-09 04:35:28.455363 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-09 04:35:28.455374 | orchestrator | Monday 09 February 2026 04:35:01 +0000 (0:00:38.018) 0:01:00.962 ******* 2026-02-09 04:35:28.455385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 04:35:28.455398 | orchestrator | 2026-02-09 04:35:28.455408 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-09 04:35:28.455419 | orchestrator | Monday 09 February 2026 04:35:02 +0000 (0:00:01.609) 0:01:02.571 ******* 2026-02-09 04:35:28.455430 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-09 04:35:28.455441 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-09 04:35:28.455452 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-09 04:35:28.455463 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-09 04:35:28.455490 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-09 04:35:28.455501 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-09 04:35:28.455512 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-09 04:35:28.455523 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-09 04:35:28.455534 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-09 04:35:28.455544 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 
2026-02-09 04:35:28.455555 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-02-09 04:35:28.455566 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-02-09 04:35:28.455576 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-09 04:35:28.455586 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-09 04:35:28.455597 | orchestrator | 2026-02-09 04:35:28.455608 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-09 04:35:28.455620 | orchestrator | Monday 09 February 2026 04:35:06 +0000 (0:00:03.837) 0:01:06.408 ******* 2026-02-09 04:35:28.455630 | orchestrator | ok: [testbed-manager] 2026-02-09 04:35:28.455641 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:35:28.455652 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:35:28.455662 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:35:28.455673 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:35:28.455683 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:35:28.455703 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:35:28.455714 | orchestrator | 2026-02-09 04:35:28.455725 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-09 04:35:28.455775 | orchestrator | Monday 09 February 2026 04:35:08 +0000 (0:00:01.387) 0:01:07.795 ******* 2026-02-09 04:35:28.455797 | orchestrator | changed: [testbed-manager] 2026-02-09 04:35:28.455815 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:35:28.455833 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:35:28.455846 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:35:28.455856 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:35:28.455867 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:35:28.455878 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:35:28.455889 | orchestrator | 2026-02-09 04:35:28.455900 | orchestrator | TASK 
[osism.services.netdata : Add netdata user to docker group] *************** 2026-02-09 04:35:28.455911 | orchestrator | Monday 09 February 2026 04:35:09 +0000 (0:00:01.342) 0:01:09.138 ******* 2026-02-09 04:35:28.455922 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:35:28.455932 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:35:28.455943 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:35:28.455954 | orchestrator | ok: [testbed-manager] 2026-02-09 04:35:28.455965 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:35:28.455976 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:35:28.455986 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:35:28.455997 | orchestrator | 2026-02-09 04:35:28.456008 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-09 04:35:28.456019 | orchestrator | Monday 09 February 2026 04:35:10 +0000 (0:00:01.284) 0:01:10.423 ******* 2026-02-09 04:35:28.456029 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:35:28.456040 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:35:28.456051 | orchestrator | ok: [testbed-manager] 2026-02-09 04:35:28.456062 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:35:28.456072 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:35:28.456083 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:35:28.456094 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:35:28.456104 | orchestrator | 2026-02-09 04:35:28.456115 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-09 04:35:28.456126 | orchestrator | Monday 09 February 2026 04:35:13 +0000 (0:00:02.192) 0:01:12.616 ******* 2026-02-09 04:35:28.456137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-09 04:35:28.456157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml 
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 04:35:28.456168 | orchestrator | 2026-02-09 04:35:28.456179 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-09 04:35:28.456190 | orchestrator | Monday 09 February 2026 04:35:14 +0000 (0:00:01.502) 0:01:14.119 ******* 2026-02-09 04:35:28.456201 | orchestrator | changed: [testbed-manager] 2026-02-09 04:35:28.456212 | orchestrator | 2026-02-09 04:35:28.456222 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-09 04:35:28.456233 | orchestrator | Monday 09 February 2026 04:35:16 +0000 (0:00:02.244) 0:01:16.363 ******* 2026-02-09 04:35:28.456244 | orchestrator | changed: [testbed-manager] 2026-02-09 04:35:28.456255 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:35:28.456266 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:35:28.456276 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:35:28.456287 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:35:28.456298 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:35:28.456308 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:35:28.456319 | orchestrator | 2026-02-09 04:35:28.456330 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:35:28.456341 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 04:35:28.456361 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 04:35:28.456373 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 04:35:28.456384 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 04:35:28.456402 | orchestrator | testbed-node-3 : ok=15  
changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 04:35:28.921911 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 04:35:28.922003 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 04:35:28.922060 | orchestrator | 2026-02-09 04:35:28.922073 | orchestrator | 2026-02-09 04:35:28.922083 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:35:28.922094 | orchestrator | Monday 09 February 2026 04:35:28 +0000 (0:00:11.666) 0:01:28.029 ******* 2026-02-09 04:35:28.922103 | orchestrator | =============================================================================== 2026-02-09 04:35:28.922112 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 38.02s 2026-02-09 04:35:28.922121 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.51s 2026-02-09 04:35:28.922130 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.67s 2026-02-09 04:35:28.922139 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.84s 2026-02-09 04:35:28.922147 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.26s 2026-02-09 04:35:28.922156 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.24s 2026-02-09 04:35:28.922164 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.19s 2026-02-09 04:35:28.922173 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.97s 2026-02-09 04:35:28.922181 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.61s 2026-02-09 04:35:28.922190 | orchestrator | osism.services.netdata : Add repository gpg key 
------------------------- 1.59s 2026-02-09 04:35:28.922198 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.50s 2026-02-09 04:35:28.922207 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.42s 2026-02-09 04:35:28.922215 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.39s 2026-02-09 04:35:28.922225 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.34s 2026-02-09 04:35:28.922234 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.28s 2026-02-09 04:35:28.922243 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2026-02-09 04:35:31.559227 | orchestrator | 2026-02-09 04:35:31 | INFO  | Task 0198b937-8049-44ca-95da-ea165bbc2a3f (prometheus) was prepared for execution. 2026-02-09 04:35:31.559433 | orchestrator | 2026-02-09 04:35:31 | INFO  | It takes a moment until task 0198b937-8049-44ca-95da-ea165bbc2a3f (prometheus) has been started and output is visible here. 
2026-02-09 04:35:41.628390 | orchestrator | 2026-02-09 04:35:41.628527 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:35:41.628553 | orchestrator | 2026-02-09 04:35:41.628573 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:35:41.628592 | orchestrator | Monday 09 February 2026 04:35:36 +0000 (0:00:00.284) 0:00:00.284 ******* 2026-02-09 04:35:41.628610 | orchestrator | ok: [testbed-manager] 2026-02-09 04:35:41.628662 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:35:41.628681 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:35:41.628698 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:35:41.628717 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:35:41.628733 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:35:41.628770 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:35:41.628857 | orchestrator | 2026-02-09 04:35:41.628877 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:35:41.628895 | orchestrator | Monday 09 February 2026 04:35:36 +0000 (0:00:00.917) 0:00:01.202 ******* 2026-02-09 04:35:41.628912 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-09 04:35:41.628929 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-09 04:35:41.628946 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-09 04:35:41.628963 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-09 04:35:41.628979 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-09 04:35:41.628996 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-09 04:35:41.629013 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-09 04:35:41.629031 | orchestrator | 2026-02-09 04:35:41.629049 | orchestrator | PLAY [Apply role 
prometheus] *************************************************** 2026-02-09 04:35:41.629067 | orchestrator | 2026-02-09 04:35:41.629085 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-09 04:35:41.629102 | orchestrator | Monday 09 February 2026 04:35:37 +0000 (0:00:00.924) 0:00:02.126 ******* 2026-02-09 04:35:41.629120 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 04:35:41.629139 | orchestrator | 2026-02-09 04:35:41.629156 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-09 04:35:41.629175 | orchestrator | Monday 09 February 2026 04:35:39 +0000 (0:00:01.391) 0:00:03.518 ******* 2026-02-09 04:35:41.629199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:41.629223 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-09 04:35:41.629244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:41.629264 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:41.629333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:41.629367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:41.629387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:41.629407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:41.629426 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:41.629447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:41.629462 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:41.629497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:42.554736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:42.554953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:42.554980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:42.555002 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:42.555022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:42.555044 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-09 04:35:42.555121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:42.555151 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:35:42.555171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:42.555189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:42.555208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:35:42.555226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:42.555244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:42.555272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:42.555302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:35:47.791341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:47.791480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:47.791509 | orchestrator | 2026-02-09 04:35:47.791524 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-09 04:35:47.791537 | orchestrator | Monday 09 February 2026 04:35:42 +0000 (0:00:03.308) 0:00:06.826 ******* 2026-02-09 04:35:47.791549 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 04:35:47.791562 | orchestrator | 2026-02-09 04:35:47.791574 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-09 04:35:47.791585 | orchestrator | Monday 09 February 2026 04:35:44 +0000 (0:00:01.741) 0:00:08.568 ******* 2026-02-09 04:35:47.791596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:47.791608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:47.791652 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-09 04:35:47.791673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:47.791727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:47.791751 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:47.791772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:47.791791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:47.791841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:47.791867 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:47.791882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:47.791896 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:47.791926 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:49.598535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:49.598642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:49.598659 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:49.598697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:49.598710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:49.598722 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:35:49.598735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:35:49.598779 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:35:49.598795 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-09 04:35:49.598870 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:49.598884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:49.598895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:49.598907 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:49.598919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:49.598940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:50.681370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:50.681452 | orchestrator | 2026-02-09 04:35:50.681463 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-09 04:35:50.681489 | orchestrator | Monday 09 February 2026 04:35:49 +0000 (0:00:05.299) 0:00:13.867 ******* 2026-02-09 04:35:50.681499 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-09 04:35:50.681507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:50.681515 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:50.681556 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-09 04:35:50.681577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:50.681585 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:50.681597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:50.681604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:50.681611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:50.681617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:50.681624 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:35:50.681632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:50.681642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:50.681653 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:51.478394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:51.478493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:51.478510 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:35:51.478524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:51.478537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:51.478548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:51.478577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2026-02-09 04:35:51.478589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:51.478622 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:35:51.478634 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:35:51.478663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:51.478676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:51.478687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 04:35:51.478699 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:35:51.478710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:51.478722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:51.478733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 04:35:51.478744 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:35:51.478761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:51.478787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:52.419992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 04:35:52.420076 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:35:52.420089 | orchestrator | 2026-02-09 04:35:52.420098 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-09 04:35:52.420107 | orchestrator | Monday 09 February 2026 04:35:51 +0000 (0:00:01.879) 0:00:15.746 ******* 2026-02-09 04:35:52.420114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:52.420123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:52.420131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:52.420140 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-09 04:35:52.420163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:52.420206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:52.420215 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:52.420223 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:52.420232 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-09 04:35:52.420242 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:52.420253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:52.420268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 
04:35:52.420282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:53.697071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:53.697173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:53.697189 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:35:53.697202 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:35:53.697212 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:35:53.697222 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:53.697233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:53.697244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:53.697290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:53.697301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 04:35:53.697311 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:35:53.697339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:53.697350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 
04:35:53.697360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 04:35:53.697370 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:35:53.697380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:53.697390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:53.697441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 04:35:53.697453 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:35:53.697463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 04:35:53.697483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 04:35:57.508060 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 04:35:57.508194 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:35:57.508221 | orchestrator | 2026-02-09 04:35:57.508240 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-09 04:35:57.508259 | orchestrator | Monday 09 February 2026 04:35:53 +0000 (0:00:02.211) 0:00:17.958 ******* 2026-02-09 04:35:57.508279 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-09 04:35:57.508299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:57.508349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:57.508387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:57.508407 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:57.508446 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:57.508466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:57.508485 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:35:57.508503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:57.508520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:57.508549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:57.508574 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:57.508595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:57.508622 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:59.883192 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:59.883334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:59.883360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:59.883410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:59.883432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:35:59.883473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:35:59.883495 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:35:59.883541 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-09 04:35:59.883564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:59.883716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:59.883741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:35:59.883771 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:59.883792 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:59.883811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:35:59.883869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:36:04.195304 | orchestrator | 2026-02-09 04:36:04.195379 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-09 04:36:04.195388 | orchestrator | Monday 09 February 2026 04:35:59 +0000 (0:00:06.187) 0:00:24.146 ******* 2026-02-09 04:36:04.195394 | orchestrator | ok: [testbed-manager -> localhost] 
2026-02-09 04:36:04.195400 | orchestrator | 2026-02-09 04:36:04.195405 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-09 04:36:04.195410 | orchestrator | Monday 09 February 2026 04:36:00 +0000 (0:00:00.954) 0:00:25.101 ******* 2026-02-09 04:36:04.195432 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8244963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195440 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8244963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195445 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8244963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195460 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087592, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8340015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195467 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087592, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8340015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195472 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8244963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2026-02-09 04:36:04.195488 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8244963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195500 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087592, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8340015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195505 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8244963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195510 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087529, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8232985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195517 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087592, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8340015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195522 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1087538, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8244963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195527 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087554, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8320017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:04.195536 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087529, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8232985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056613 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087529, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8232985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056727 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087592, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770604792.8340015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056746 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087529, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8232985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056777 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087554, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8320017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056789 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087592, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8340015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056802 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1087523, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8190014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056814 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087554, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8320017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056954 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087554, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8320017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056980 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087529, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8232985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.056997 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1087592, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8340015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:06.057026 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1087523, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8190014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.057046 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1087523, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8190014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.057062 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087529, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8232985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.057083 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1087541, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:06.057104 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1087523, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770604792.8190014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728403 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087554, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8320017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728498 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1087541, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728529 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1087541, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728541 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1087541, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728553 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087554, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8320017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728583 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1087553, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.826538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728595 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1087523, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8190014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728625 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1087545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728637 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1087541, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728652 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1087523, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8190014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728663 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1087553, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.826538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728675 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1087553, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.826538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728693 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1087553, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.826538, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728705 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1087553, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.826538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:07.728724 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1087536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.824277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.122567 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1087545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.122721 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1087545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.122751 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1087529, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8232985, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:09.122801 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1087541, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.122815 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1087545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.122826 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1087545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.122838 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1087536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.824277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.122958 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087588, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.833876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.122996 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1087536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.824277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.123017 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087588, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.833876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.123050 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1087536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1770604792.824277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.123069 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1087553, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.826538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.123089 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087588, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.833876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.123110 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1087536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.824277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:09.123143 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087512, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8181634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624229 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087512, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8181634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624320 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1087554, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8320017, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:10.624347 | orchestrator | skipping: [testbed-node-3] 
=> (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087588, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.833876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624356 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1087545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624364 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087512, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8181634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624372 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087603, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8369358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624379 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087603, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8369358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624401 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087584, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8332243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624415 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087588, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1770604792.833876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624428 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087584, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8332243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624436 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087512, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8181634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624443 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087512, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8181634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624451 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1087536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.824277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624458 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087603, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8369358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:10.624471 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087522, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8211718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267412 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087603, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8369358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267523 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087584, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8332243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267540 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087522, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8211718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267552 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087603, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8369358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267564 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087588, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.833876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267575 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087584, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8332243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267587 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087519, 'dev': 108, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8188179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267645 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1087523, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8190014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:12.267659 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087522, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8211718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267671 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087512, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8181634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267682 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087584, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8332243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267693 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087519, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8188179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267705 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087522, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8211718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267716 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087522, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8211718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:12.267746 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087519, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8188179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523375 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087549, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8262174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523460 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087519, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8188179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523471 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087603, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8369358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523479 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087549, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8262174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523487 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087549, 'dev': 108, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8262174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523513 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087549, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8262174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523533 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087584, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8332243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523556 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087546, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523563 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087519, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8188179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523571 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1087541, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:13.523578 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087546, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523585 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087546, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523598 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087601, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8360016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523606 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:36:13.523619 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087546, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:13.523632 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087601, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8360016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842272 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:36:22.842389 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087522, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8211718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842408 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087601, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8360016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842416 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:36:22.842424 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087549, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8262174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842431 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087601, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8360016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842453 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:36:22.842461 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087519, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8188179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842479 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087546, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842500 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1087553, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.826538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:22.842507 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087549, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8262174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842513 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087601, 'dev': 108, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8360016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842520 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:36:22.842526 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087546, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842538 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087601, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8360016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-09 04:36:22.842544 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:36:22.842551 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1087545, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1770604792.8250113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:22.842560 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1087536, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.824277, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:22.842572 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087588, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.833876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:49.645504 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087512, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8181634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:49.645641 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1087603, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8369358, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:49.645663 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1087584, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8332243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:49.645704 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1087522, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8211718, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:49.645718 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1087519, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8188179, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:49.645745 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1087549, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8262174, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:49.645757 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1087546, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.825456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:49.645788 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1087601, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8360016, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-09 04:36:49.645800 | orchestrator | 2026-02-09 04:36:49.645814 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-09 04:36:49.645826 | orchestrator | Monday 09 February 2026 04:36:28 +0000 (0:00:27.186) 0:00:52.287 ******* 2026-02-09 04:36:49.645838 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 04:36:49.645850 | orchestrator | 2026-02-09 04:36:49.645861 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-09 04:36:49.645872 | orchestrator | Monday 09 February 2026 04:36:28 +0000 (0:00:00.853) 0:00:53.141 ******* 2026-02-09 04:36:49.645892 | orchestrator | [WARNING]: Skipped 2026-02-09 04:36:49.645904 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.645916 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-09 04:36:49.645927 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.645937 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-09 04:36:49.645949 | orchestrator | [WARNING]: Skipped 2026-02-09 04:36:49.645960 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646012 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-09 04:36:49.646091 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646105 | orchestrator | 
node-0/prometheus.yml.d' is not a directory 2026-02-09 04:36:49.646119 | orchestrator | [WARNING]: Skipped 2026-02-09 04:36:49.646131 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646144 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-09 04:36:49.646157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646169 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-09 04:36:49.646182 | orchestrator | [WARNING]: Skipped 2026-02-09 04:36:49.646194 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646204 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-09 04:36:49.646215 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646225 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-09 04:36:49.646236 | orchestrator | [WARNING]: Skipped 2026-02-09 04:36:49.646247 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646258 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-09 04:36:49.646268 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646279 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-09 04:36:49.646290 | orchestrator | [WARNING]: Skipped 2026-02-09 04:36:49.646300 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646311 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-09 04:36:49.646321 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646332 | orchestrator | node-4/prometheus.yml.d' is not a directory 
2026-02-09 04:36:49.646343 | orchestrator | [WARNING]: Skipped 2026-02-09 04:36:49.646365 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646383 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-09 04:36:49.646402 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-09 04:36:49.646421 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-09 04:36:49.646439 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 04:36:49.646457 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 04:36:49.646476 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-09 04:36:49.646495 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-09 04:36:49.646514 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-09 04:36:49.646530 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-09 04:36:49.646540 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-09 04:36:49.646551 | orchestrator | 2026-02-09 04:36:49.646562 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-09 04:36:49.646573 | orchestrator | Monday 09 February 2026 04:36:30 +0000 (0:00:01.978) 0:00:55.119 ******* 2026-02-09 04:36:49.646583 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-09 04:36:49.646604 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:36:49.646615 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-09 04:36:49.646627 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:36:49.646638 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-09 04:36:49.646649 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:36:49.646670 | orchestrator | skipping: 
[testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-09 04:37:07.175979 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:37:07.176133 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-09 04:37:07.176150 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:37:07.176164 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-09 04:37:07.176176 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:37:07.176189 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-09 04:37:07.176201 | orchestrator | 2026-02-09 04:37:07.176214 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-09 04:37:07.176227 | orchestrator | Monday 09 February 2026 04:36:49 +0000 (0:00:18.797) 0:01:13.917 ******* 2026-02-09 04:37:07.176238 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-09 04:37:07.176251 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-09 04:37:07.176263 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:37:07.176275 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:37:07.176287 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-09 04:37:07.176300 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:37:07.176312 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-09 04:37:07.176325 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:37:07.176336 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-09 04:37:07.176349 
| orchestrator | skipping: [testbed-node-3] 2026-02-09 04:37:07.176361 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-09 04:37:07.176373 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:37:07.176386 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-09 04:37:07.176398 | orchestrator | 2026-02-09 04:37:07.176410 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-09 04:37:07.176422 | orchestrator | Monday 09 February 2026 04:36:52 +0000 (0:00:02.950) 0:01:16.868 ******* 2026-02-09 04:37:07.176435 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-09 04:37:07.176448 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-09 04:37:07.176461 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:37:07.176474 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:37:07.176486 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-09 04:37:07.176499 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:37:07.176512 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-09 04:37:07.176526 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:37:07.176563 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-09 04:37:07.176576 | orchestrator | skipping: [testbed-node-4] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-09 04:37:07.176590 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:37:07.176616 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-09 04:37:07.176630 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:37:07.176643 | orchestrator | 2026-02-09 04:37:07.176657 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-09 04:37:07.176671 | orchestrator | Monday 09 February 2026 04:36:54 +0000 (0:00:01.868) 0:01:18.736 ******* 2026-02-09 04:37:07.176684 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 04:37:07.176697 | orchestrator | 2026-02-09 04:37:07.176709 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-09 04:37:07.176722 | orchestrator | Monday 09 February 2026 04:36:55 +0000 (0:00:00.762) 0:01:19.498 ******* 2026-02-09 04:37:07.176733 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:37:07.176746 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:37:07.176758 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:37:07.176770 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:37:07.176781 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:37:07.176792 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:37:07.176803 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:37:07.176814 | orchestrator | 2026-02-09 04:37:07.176826 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-09 04:37:07.176837 | orchestrator | Monday 09 February 2026 04:36:56 +0000 (0:00:00.809) 0:01:20.308 ******* 2026-02-09 04:37:07.176848 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:37:07.176858 | orchestrator | skipping: [testbed-node-3] 
2026-02-09 04:37:07.176869 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:37:07.176880 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:37:07.176891 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:37:07.176902 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:37:07.176914 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:37:07.176924 | orchestrator | 2026-02-09 04:37:07.176935 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-09 04:37:07.176964 | orchestrator | Monday 09 February 2026 04:36:58 +0000 (0:00:02.218) 0:01:22.527 ******* 2026-02-09 04:37:07.176975 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-09 04:37:07.176985 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:37:07.176996 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-09 04:37:07.177007 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-09 04:37:07.177058 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-09 04:37:07.177070 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:37:07.177082 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:37:07.177094 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:37:07.177105 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-09 04:37:07.177116 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:37:07.177126 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-09 04:37:07.177136 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:37:07.177147 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  
2026-02-09 04:37:07.177158 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:37:07.177169 | orchestrator | 2026-02-09 04:37:07.177181 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-09 04:37:07.177203 | orchestrator | Monday 09 February 2026 04:36:59 +0000 (0:00:01.512) 0:01:24.040 ******* 2026-02-09 04:37:07.177215 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-09 04:37:07.177226 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:37:07.177237 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-09 04:37:07.177248 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:37:07.177259 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-09 04:37:07.177271 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:37:07.177282 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-09 04:37:07.177293 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:37:07.177304 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-09 04:37:07.177315 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:37:07.177325 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-09 04:37:07.177337 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-09 04:37:07.177349 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:37:07.177360 | orchestrator | 2026-02-09 04:37:07.177371 | orchestrator | TASK [prometheus : Find extra prometheus server 
config files] ****************** 2026-02-09 04:37:07.177382 | orchestrator | Monday 09 February 2026 04:37:01 +0000 (0:00:01.596) 0:01:25.636 ******* 2026-02-09 04:37:07.177393 | orchestrator | [WARNING]: Skipped 2026-02-09 04:37:07.177405 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-09 04:37:07.177416 | orchestrator | due to this access issue: 2026-02-09 04:37:07.177427 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-09 04:37:07.177438 | orchestrator | not a directory 2026-02-09 04:37:07.177449 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 04:37:07.177460 | orchestrator | 2026-02-09 04:37:07.177471 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-09 04:37:07.177490 | orchestrator | Monday 09 February 2026 04:37:02 +0000 (0:00:01.262) 0:01:26.899 ******* 2026-02-09 04:37:07.177501 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:37:07.177512 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:37:07.177523 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:37:07.177533 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:37:07.177543 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:37:07.177555 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:37:07.177566 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:37:07.177577 | orchestrator | 2026-02-09 04:37:07.177588 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-09 04:37:07.177600 | orchestrator | Monday 09 February 2026 04:37:03 +0000 (0:00:01.066) 0:01:27.966 ******* 2026-02-09 04:37:07.177612 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:37:07.177623 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:37:07.177634 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:37:07.177645 | orchestrator | 
skipping: [testbed-node-2] 2026-02-09 04:37:07.177656 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:37:07.177668 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:37:07.177679 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:37:07.177685 | orchestrator | 2026-02-09 04:37:07.177692 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-09 04:37:07.177699 | orchestrator | Monday 09 February 2026 04:37:04 +0000 (0:00:00.983) 0:01:28.950 ******* 2026-02-09 04:37:07.177722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:37:08.844806 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-09 04:37:08.844936 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:37:08.844958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:37:08.844971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:37:08.845001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:37:08.845014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:08.845055 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:37:08.845115 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-09 04:37:08.845137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:08.845158 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:37:08.845178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:08.845197 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:37:08.845226 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:37:08.845247 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:37:08.845321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:10.891335 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:10.891439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:37:10.891457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:10.891469 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:37:10.891481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-09 04:37:10.891512 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-09 04:37:10.891564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:37:10.891579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:37:10.891591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-09 04:37:10.891607 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:10.891627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:10.891668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:10.891700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 04:37:10.891721 | orchestrator | 2026-02-09 04:37:10.891743 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-09 04:37:10.891766 | orchestrator | Monday 09 
February 2026 04:37:08 +0000 (0:00:04.173) 0:01:33.123 ******* 2026-02-09 04:37:10.891778 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-09 04:37:10.891790 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:37:10.891801 | orchestrator | 2026-02-09 04:37:10.891812 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-09 04:37:10.891823 | orchestrator | Monday 09 February 2026 04:37:10 +0000 (0:00:01.298) 0:01:34.421 ******* 2026-02-09 04:37:10.891834 | orchestrator | 2026-02-09 04:37:10.891845 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-09 04:37:10.891855 | orchestrator | Monday 09 February 2026 04:37:10 +0000 (0:00:00.263) 0:01:34.685 ******* 2026-02-09 04:37:10.891866 | orchestrator | 2026-02-09 04:37:10.891877 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-09 04:37:10.891887 | orchestrator | Monday 09 February 2026 04:37:10 +0000 (0:00:00.074) 0:01:34.759 ******* 2026-02-09 04:37:10.891898 | orchestrator | 2026-02-09 04:37:10.891909 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-09 04:37:10.891930 | orchestrator | Monday 09 February 2026 04:37:10 +0000 (0:00:00.076) 0:01:34.836 ******* 2026-02-09 04:38:58.642823 | orchestrator | 2026-02-09 04:38:58.642973 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-09 04:38:58.642992 | orchestrator | Monday 09 February 2026 04:37:10 +0000 (0:00:00.070) 0:01:34.906 ******* 2026-02-09 04:38:58.643005 | orchestrator | 2026-02-09 04:38:58.643017 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-09 04:38:58.643028 | orchestrator | Monday 09 February 2026 04:37:10 +0000 (0:00:00.068) 0:01:34.975 ******* 2026-02-09 04:38:58.643039 | orchestrator | 
2026-02-09 04:38:58.643050 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-09 04:38:58.643061 | orchestrator | Monday 09 February 2026 04:37:10 +0000 (0:00:00.082) 0:01:35.057 ******* 2026-02-09 04:38:58.643071 | orchestrator | 2026-02-09 04:38:58.643082 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-09 04:38:58.643093 | orchestrator | Monday 09 February 2026 04:37:10 +0000 (0:00:00.096) 0:01:35.153 ******* 2026-02-09 04:38:58.643104 | orchestrator | changed: [testbed-manager] 2026-02-09 04:38:58.643116 | orchestrator | 2026-02-09 04:38:58.643127 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-09 04:38:58.643138 | orchestrator | Monday 09 February 2026 04:37:37 +0000 (0:00:26.970) 0:02:02.124 ******* 2026-02-09 04:38:58.643149 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:38:58.643160 | orchestrator | changed: [testbed-manager] 2026-02-09 04:38:58.643171 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:38:58.643182 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:38:58.643193 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:38:58.643204 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:38:58.643214 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:38:58.643226 | orchestrator | 2026-02-09 04:38:58.643237 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-09 04:38:58.643249 | orchestrator | Monday 09 February 2026 04:37:51 +0000 (0:00:14.072) 0:02:16.197 ******* 2026-02-09 04:38:58.643262 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:38:58.643274 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:38:58.643312 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:38:58.643325 | orchestrator | 2026-02-09 04:38:58.643338 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-memcached-exporter container] *** 2026-02-09 04:38:58.643364 | orchestrator | Monday 09 February 2026 04:37:57 +0000 (0:00:05.767) 0:02:21.965 ******* 2026-02-09 04:38:58.643377 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:38:58.643390 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:38:58.643402 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:38:58.643414 | orchestrator | 2026-02-09 04:38:58.643428 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-09 04:38:58.643441 | orchestrator | Monday 09 February 2026 04:38:08 +0000 (0:00:10.666) 0:02:32.631 ******* 2026-02-09 04:38:58.643483 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:38:58.643495 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:38:58.643508 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:38:58.643520 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:38:58.643532 | orchestrator | changed: [testbed-manager] 2026-02-09 04:38:58.643544 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:38:58.643557 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:38:58.643568 | orchestrator | 2026-02-09 04:38:58.643581 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-02-09 04:38:58.643593 | orchestrator | Monday 09 February 2026 04:38:23 +0000 (0:00:14.721) 0:02:47.353 ******* 2026-02-09 04:38:58.643605 | orchestrator | changed: [testbed-manager] 2026-02-09 04:38:58.643618 | orchestrator | 2026-02-09 04:38:58.643631 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-09 04:38:58.643642 | orchestrator | Monday 09 February 2026 04:38:31 +0000 (0:00:08.643) 0:02:55.997 ******* 2026-02-09 04:38:58.643653 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:38:58.643679 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:38:58.643691 | orchestrator | changed: 
[testbed-node-0] 2026-02-09 04:38:58.643701 | orchestrator | 2026-02-09 04:38:58.643712 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-09 04:38:58.643723 | orchestrator | Monday 09 February 2026 04:38:42 +0000 (0:00:10.394) 0:03:06.391 ******* 2026-02-09 04:38:58.643733 | orchestrator | changed: [testbed-manager] 2026-02-09 04:38:58.643744 | orchestrator | 2026-02-09 04:38:58.643755 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-09 04:38:58.643766 | orchestrator | Monday 09 February 2026 04:38:47 +0000 (0:00:05.791) 0:03:12.182 ******* 2026-02-09 04:38:58.643777 | orchestrator | changed: [testbed-node-4] 2026-02-09 04:38:58.643787 | orchestrator | changed: [testbed-node-5] 2026-02-09 04:38:58.643798 | orchestrator | changed: [testbed-node-3] 2026-02-09 04:38:58.643809 | orchestrator | 2026-02-09 04:38:58.643819 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:38:58.643831 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-09 04:38:58.643844 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-09 04:38:58.643854 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-09 04:38:58.643865 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-09 04:38:58.643876 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-09 04:38:58.643904 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-09 04:38:58.643927 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-09 
04:38:58.643938 | orchestrator | 2026-02-09 04:38:58.643949 | orchestrator | 2026-02-09 04:38:58.643960 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:38:58.643971 | orchestrator | Monday 09 February 2026 04:38:58 +0000 (0:00:10.137) 0:03:22.320 ******* 2026-02-09 04:38:58.643982 | orchestrator | =============================================================================== 2026-02-09 04:38:58.643993 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.19s 2026-02-09 04:38:58.644003 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 26.97s 2026-02-09 04:38:58.644014 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.80s 2026-02-09 04:38:58.644025 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.72s 2026-02-09 04:38:58.644036 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.07s 2026-02-09 04:38:58.644046 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.67s 2026-02-09 04:38:58.644057 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.39s 2026-02-09 04:38:58.644068 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.14s 2026-02-09 04:38:58.644079 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.64s 2026-02-09 04:38:58.644090 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.19s 2026-02-09 04:38:58.644100 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.79s 2026-02-09 04:38:58.644111 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.77s 2026-02-09 04:38:58.644122 | orchestrator | service-cert-copy : 
prometheus | Copying over extra CA certificates ----- 5.30s 2026-02-09 04:38:58.644133 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.17s 2026-02-09 04:38:58.644144 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.31s 2026-02-09 04:38:58.644154 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.95s 2026-02-09 04:38:58.644165 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.22s 2026-02-09 04:38:58.644176 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.21s 2026-02-09 04:38:58.644186 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.98s 2026-02-09 04:38:58.644197 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 1.88s 2026-02-09 04:39:01.284286 | orchestrator | 2026-02-09 04:39:01 | INFO  | Task cdde3689-9f98-4d9c-9547-6484e5bcba3a (grafana) was prepared for execution. 2026-02-09 04:39:01.284386 | orchestrator | 2026-02-09 04:39:01 | INFO  | It takes a moment until task cdde3689-9f98-4d9c-9547-6484e5bcba3a (grafana) has been started and output is visible here. 
2026-02-09 04:39:11.874823 | orchestrator | 2026-02-09 04:39:11.874937 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:39:11.874954 | orchestrator | 2026-02-09 04:39:11.874982 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:39:11.874995 | orchestrator | Monday 09 February 2026 04:39:06 +0000 (0:00:00.288) 0:00:00.288 ******* 2026-02-09 04:39:11.875006 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:39:11.875019 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:39:11.875030 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:39:11.875041 | orchestrator | 2026-02-09 04:39:11.875052 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:39:11.875064 | orchestrator | Monday 09 February 2026 04:39:06 +0000 (0:00:00.349) 0:00:00.638 ******* 2026-02-09 04:39:11.875074 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-09 04:39:11.875086 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-09 04:39:11.875119 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-09 04:39:11.875170 | orchestrator | 2026-02-09 04:39:11.875182 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-09 04:39:11.875193 | orchestrator | 2026-02-09 04:39:11.875204 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-09 04:39:11.875215 | orchestrator | Monday 09 February 2026 04:39:06 +0000 (0:00:00.490) 0:00:01.129 ******* 2026-02-09 04:39:11.875226 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:39:11.875237 | orchestrator | 2026-02-09 04:39:11.875248 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 
2026-02-09 04:39:11.875259 | orchestrator | Monday 09 February 2026 04:39:07 +0000 (0:00:00.626) 0:00:01.756 ******* 2026-02-09 04:39:11.875273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:11.875288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:11.875300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:11.875311 | orchestrator | 2026-02-09 04:39:11.875323 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-09 04:39:11.875334 | orchestrator | Monday 09 February 2026 04:39:08 +0000 (0:00:00.969) 0:00:02.725 ******* 2026-02-09 04:39:11.875345 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-09 04:39:11.875359 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-09 04:39:11.875372 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 04:39:11.875386 | orchestrator | 2026-02-09 04:39:11.875399 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-09 04:39:11.875413 | orchestrator | Monday 09 February 2026 04:39:09 +0000 (0:00:00.889) 0:00:03.614 ******* 2026-02-09 04:39:11.875425 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:39:11.875438 | orchestrator | 2026-02-09 04:39:11.875451 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-09 04:39:11.875472 | orchestrator | Monday 09 February 2026 04:39:09 +0000 (0:00:00.586) 0:00:04.201 ******* 2026-02-09 04:39:11.875541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:11.875556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:11.875570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:11.875584 | orchestrator | 2026-02-09 04:39:11.875596 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-09 04:39:11.875609 | orchestrator | Monday 09 February 2026 04:39:11 +0000 
(0:00:01.311) 0:00:05.513 ******* 2026-02-09 04:39:11.875622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-09 04:39:11.875635 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:39:11.875649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-09 04:39:11.875662 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:39:11.875752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-09 04:39:19.079987 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:39:19.080097 | orchestrator | 2026-02-09 04:39:19.080115 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-09 04:39:19.080129 | orchestrator | Monday 09 February 2026 04:39:11 +0000 (0:00:00.608) 0:00:06.121 ******* 2026-02-09 04:39:19.080143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-09 04:39:19.080159 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:39:19.080172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-09 04:39:19.080183 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:39:19.080195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-09 04:39:19.080207 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:39:19.080218 | orchestrator | 2026-02-09 04:39:19.080229 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-09 04:39:19.080240 | orchestrator | Monday 09 February 2026 04:39:12 +0000 (0:00:00.664) 0:00:06.786 ******* 2026-02-09 04:39:19.080252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:19.080288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:19.080334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:19.080347 | orchestrator | 2026-02-09 04:39:19.080358 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-09 04:39:19.080369 | orchestrator | Monday 09 February 2026 04:39:13 +0000 (0:00:01.350) 0:00:08.137 ******* 2026-02-09 04:39:19.080380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:19.080392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:19.080404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:39:19.080424 | 
orchestrator |
2026-02-09 04:39:19.080435 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-09 04:39:19.080446 | orchestrator | Monday 09 February 2026 04:39:15 +0000 (0:00:01.736) 0:00:09.873 *******
2026-02-09 04:39:19.080457 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:39:19.080468 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:39:19.080479 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:39:19.080526 | orchestrator |
2026-02-09 04:39:19.080542 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-09 04:39:19.080555 | orchestrator | Monday 09 February 2026 04:39:15 +0000 (0:00:00.351) 0:00:10.224 *******
2026-02-09 04:39:19.080567 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-09 04:39:19.080582 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-09 04:39:19.080596 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-09 04:39:19.080609 | orchestrator |
2026-02-09 04:39:19.080622 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-09 04:39:19.080635 | orchestrator | Monday 09 February 2026 04:39:17 +0000 (0:00:01.277) 0:00:11.503 *******
2026-02-09 04:39:19.080649 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-09 04:39:19.080662 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-09 04:39:19.080675 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-09 04:39:19.080688 | orchestrator |
2026-02-09 04:39:19.080707 |
orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-09 04:39:19.080730 | orchestrator | Monday 09 February 2026 04:39:19 +0000 (0:00:01.815) 0:00:13.318 *******
2026-02-09 04:39:25.684319 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 04:39:25.684422 | orchestrator |
2026-02-09 04:39:25.684437 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-09 04:39:25.684449 | orchestrator | Monday 09 February 2026 04:39:19 +0000 (0:00:00.811) 0:00:14.129 *******
2026-02-09 04:39:25.684459 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-09 04:39:25.684470 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-09 04:39:25.684480 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:39:25.684490 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:39:25.684545 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:39:25.684556 | orchestrator |
2026-02-09 04:39:25.684566 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-09 04:39:25.684576 | orchestrator | Monday 09 February 2026 04:39:20 +0000 (0:00:00.792) 0:00:14.922 *******
2026-02-09 04:39:25.684586 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:39:25.684596 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:39:25.684606 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:39:25.684615 | orchestrator |
2026-02-09 04:39:25.684625 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-09 04:39:25.684635 | orchestrator | Monday 09 February 2026 04:39:21 +0000 (0:00:00.393) 0:00:15.315 *******
2026-02-09 04:39:25.684648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644',
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1087062, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.402706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1087062, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.402706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1087062, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.402706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1087128, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4177902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1087128, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4177902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1087128, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4177902, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1087074, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4042664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1087074, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4042664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1087074, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4042664, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684896 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1087130, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4195886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1087130, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4195886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:25.684933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1087130, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4195886, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:29.290006 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1087096, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4107604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:29.290158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1087096, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4107604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:29.290191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1087096, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4107604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-02-09 04:39:29.290201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1087117, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4159682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:29.290211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1087117, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4159682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:29.290232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1087117, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4159682, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:29.290292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1087060, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4010832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:29.290304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1087060, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4010832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:29.290320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1087060, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4010832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:29.290328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1087069, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.402706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:29.290337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1087069, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.402706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:29.290349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1087069, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.402706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:29.290365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1087075, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4049966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1087075, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4049966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1087075, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4049966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1087101, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.412848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1087101, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.412848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1087101, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.412848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1087124, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4170492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1087124, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4170492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1087124, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4170492, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1087071, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4029965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1087071, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4029965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1087071, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4029965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1087112, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4148014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:33.422778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1087112, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4148014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1087112, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4148014, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1087098, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4119966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1087098, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4119966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1087098, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4119966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1087081, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4107604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1087081, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4107604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1087081, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4107604, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1087079, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4059966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1087079, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4059966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1087079, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4059966, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1087105, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4138422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1087105, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4138422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:37.303946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1087105, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4138422, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1087076, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.405686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1087076, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.405686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1087076, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.405686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1087122, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4159968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1087122, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4159968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1087122, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4159968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1087468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8050013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1087468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8050013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1087468, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8050013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1087205, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7394652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1087205, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7394652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1087205, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7394652, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:41.172434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1087187, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4289968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1087187, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4289968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1087187, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4289968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1087302, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7468657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1087302, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7468657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1087302, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7468657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1087143, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4217002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1087143, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4217002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1087143, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4217002, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1087411, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7720008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1087411, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7720008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1087411, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7720008, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1087340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7640388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-09 04:39:45.179703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1087340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0,
'mtime': 1764530892.0, 'ctime': 1770604792.7640388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.080689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1087340, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7640388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.080810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1087413, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.773001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.080827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1087413, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.773001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.080929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1087413, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.773001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.080950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1087461, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.801836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.080963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1087461, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.801836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.080997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1087461, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.801836, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.081010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1087393, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.770001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.081022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1087393, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.770001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.081049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1087393, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.770001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.081061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1087286, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7417817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.081072 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1087286, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7417817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:49.081092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1087286, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7417817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1087200, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.432038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828339 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1087200, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.432038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1087200, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.432038, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1087278, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7400005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-02-09 04:39:52.828417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1087278, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7400005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1087278, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7400005, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1087191, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4311872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1087191, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4311872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1087191, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4311872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1087294, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7422297, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1087294, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7422297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1087294, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7422297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:52.828644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1087446, 'dev': 108, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8000011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.602914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1087446, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8000011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1087446, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.8000011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1087434, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7950013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1087434, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7950013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1087434, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.7950013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1087149, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4219968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1087149, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4219968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1087149, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4219968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603186 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1087152, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4289968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1087152, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4289968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1087152, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.4289968, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 
04:39:56.603220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1087381, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.766684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:39:56.603241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1087381, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.766684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:41:31.049909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1087381, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.766684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:41:31.050141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1087419, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.776001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:41:31.050177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1087419, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770604792.776001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:41:31.050191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1087419, 'dev': 108, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770604792.776001, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-09 04:41:31.050203 | orchestrator | 2026-02-09 04:41:31.050218 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-09 04:41:31.050240 | orchestrator | Monday 09 February 2026 04:39:57 +0000 (0:00:36.862) 0:00:52.177 ******* 2026-02-09 04:41:31.050256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:41:31.050314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:41:31.050377 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-09 04:41:31.050392 | orchestrator | 2026-02-09 04:41:31.050405 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-09 04:41:31.050419 | orchestrator | Monday 09 February 2026 04:39:58 +0000 (0:00:00.976) 0:00:53.154 ******* 2026-02-09 04:41:31.050433 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:41:31.050447 | orchestrator | 2026-02-09 04:41:31.050460 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-09 04:41:31.050473 | orchestrator | Monday 09 February 2026 04:40:01 +0000 (0:00:02.241) 0:00:55.396 ******* 2026-02-09 04:41:31.050485 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:41:31.050498 | orchestrator | 2026-02-09 04:41:31.050512 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-09 04:41:31.050531 | orchestrator | Monday 09 February 2026 04:40:03 +0000 (0:00:02.112) 0:00:57.508 ******* 2026-02-09 04:41:31.050545 | orchestrator | 2026-02-09 04:41:31.050558 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-09 04:41:31.050571 | orchestrator | Monday 09 February 2026 04:40:03 +0000 (0:00:00.078) 0:00:57.587 ******* 2026-02-09 04:41:31.050584 | orchestrator | 
2026-02-09 04:41:31.050597 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-09 04:41:31.050608 | orchestrator | Monday 09 February 2026 04:40:03 +0000 (0:00:00.073) 0:00:57.661 ******* 2026-02-09 04:41:31.050618 | orchestrator | 2026-02-09 04:41:31.050629 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-09 04:41:31.050639 | orchestrator | Monday 09 February 2026 04:40:03 +0000 (0:00:00.081) 0:00:57.742 ******* 2026-02-09 04:41:31.050650 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:41:31.050662 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:41:31.050673 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:41:31.050684 | orchestrator | 2026-02-09 04:41:31.050695 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-09 04:41:31.050722 | orchestrator | Monday 09 February 2026 04:40:10 +0000 (0:00:07.143) 0:01:04.886 ******* 2026-02-09 04:41:31.050781 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:41:31.050793 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:41:31.050804 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-09 04:41:31.050816 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-09 04:41:31.050827 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2026-02-09 04:41:31.050851 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:41:31.050864 | orchestrator | 2026-02-09 04:41:31.050875 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-09 04:41:31.050885 | orchestrator | Monday 09 February 2026 04:40:48 +0000 (0:00:37.922) 0:01:42.809 ******* 2026-02-09 04:41:31.050896 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:41:31.050907 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:41:31.050917 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:41:31.050928 | orchestrator | 2026-02-09 04:41:31.050950 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-09 04:41:31.050962 | orchestrator | Monday 09 February 2026 04:41:26 +0000 (0:00:37.570) 0:02:20.379 ******* 2026-02-09 04:41:31.050973 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:41:31.050983 | orchestrator | 2026-02-09 04:41:31.050994 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-09 04:41:31.051005 | orchestrator | Monday 09 February 2026 04:41:28 +0000 (0:00:02.067) 0:02:22.446 ******* 2026-02-09 04:41:31.051015 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:41:31.051026 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:41:31.051040 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:41:31.051058 | orchestrator | 2026-02-09 04:41:31.051069 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-09 04:41:31.051080 | orchestrator | Monday 09 February 2026 04:41:28 +0000 (0:00:00.359) 0:02:22.806 ******* 2026-02-09 04:41:31.051096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-02-09 04:41:31.051124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-09 04:41:31.777631 | orchestrator | 2026-02-09 04:41:31.875442 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-09 04:41:31.875551 | orchestrator | Monday 09 February 2026 04:41:31 +0000 (0:00:02.475) 0:02:25.282 ******* 2026-02-09 04:41:31.875578 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:41:31.875600 | orchestrator | 2026-02-09 04:41:31.875620 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:41:31.875673 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-09 04:41:31.875687 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-09 04:41:31.875699 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-09 04:41:31.875710 | orchestrator | 2026-02-09 04:41:31.875721 | orchestrator | 2026-02-09 04:41:31.875732 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:41:31.875790 | orchestrator | Monday 09 February 2026 04:41:31 +0000 (0:00:00.333) 0:02:25.615 ******* 2026-02-09 04:41:31.875801 | orchestrator | =============================================================================== 2026-02-09 04:41:31.875812 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 37.92s 2026-02-09 04:41:31.875823 | orchestrator | grafana : Restart remaining 
grafana containers ------------------------- 37.57s 2026-02-09 04:41:31.875855 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.86s 2026-02-09 04:41:31.875867 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.14s 2026-02-09 04:41:31.875910 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.48s 2026-02-09 04:41:31.875929 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.24s 2026-02-09 04:41:31.875948 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.11s 2026-02-09 04:41:31.875966 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.07s 2026-02-09 04:41:31.875984 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.82s 2026-02-09 04:41:31.876001 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.74s 2026-02-09 04:41:31.876020 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.35s 2026-02-09 04:41:31.876039 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.31s 2026-02-09 04:41:31.876057 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.28s 2026-02-09 04:41:31.876075 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.98s 2026-02-09 04:41:31.876093 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.97s 2026-02-09 04:41:31.876112 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.89s 2026-02-09 04:41:31.876131 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.81s 2026-02-09 04:41:31.876150 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.79s 2026-02-09 04:41:31.876169 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.66s 2026-02-09 04:41:31.876188 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.63s 2026-02-09 04:41:32.188828 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-09 04:41:32.194916 | orchestrator | + set -e 2026-02-09 04:41:32.194975 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-09 04:41:32.194985 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 04:41:32.194992 | orchestrator | ++ INTERACTIVE=false 2026-02-09 04:41:32.194999 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 04:41:32.195005 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-09 04:41:32.195011 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 04:41:32.195018 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 04:41:32.195024 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 04:41:32.195030 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-09 04:41:32.195036 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 04:41:32.195042 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 04:41:32.195048 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 04:41:32.195054 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 04:41:32.195061 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 04:41:32.195067 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 04:41:32.195073 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 04:41:32.195080 | orchestrator | ++ export ARA=false 2026-02-09 04:41:32.195086 | orchestrator | ++ ARA=false 2026-02-09 04:41:32.195093 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 04:41:32.195099 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 04:41:32.195105 | orchestrator | ++ export TEMPEST=false 2026-02-09 04:41:32.195111 | orchestrator | ++ TEMPEST=false 
2026-02-09 04:41:32.195117 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 04:41:32.195123 | orchestrator | ++ IS_ZUUL=true 2026-02-09 04:41:32.195129 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 04:41:32.195135 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 04:41:32.195141 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 04:41:32.195147 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 04:41:32.195153 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 04:41:32.195159 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 04:41:32.195165 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 04:41:32.195171 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 04:41:32.195177 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 04:41:32.195183 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 04:41:32.195795 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-09 04:41:32.266493 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-09 04:41:32.266588 | orchestrator | + osism apply clusterapi 2026-02-09 04:41:34.533908 | orchestrator | 2026-02-09 04:41:34 | INFO  | Task 20f1d5e6-95cd-40d9-b5d8-7f27aa4f58b5 (clusterapi) was prepared for execution. 2026-02-09 04:41:34.534096 | orchestrator | 2026-02-09 04:41:34 | INFO  | It takes a moment until task 20f1d5e6-95cd-40d9-b5d8-7f27aa4f58b5 (clusterapi) has been started and output is visible here. 
2026-02-09 04:42:30.906646 | orchestrator | 2026-02-09 04:42:30.906774 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-09 04:42:30.906788 | orchestrator | 2026-02-09 04:42:30.906798 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-09 04:42:30.906806 | orchestrator | Monday 09 February 2026 04:41:39 +0000 (0:00:00.202) 0:00:00.202 ******* 2026-02-09 04:42:30.906815 | orchestrator | included: cert_manager for testbed-manager 2026-02-09 04:42:30.906823 | orchestrator | 2026-02-09 04:42:30.906831 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-09 04:42:30.906839 | orchestrator | Monday 09 February 2026 04:41:39 +0000 (0:00:00.262) 0:00:00.465 ******* 2026-02-09 04:42:30.906847 | orchestrator | changed: [testbed-manager] 2026-02-09 04:42:30.906856 | orchestrator | 2026-02-09 04:42:30.906864 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-09 04:42:30.906872 | orchestrator | Monday 09 February 2026 04:41:45 +0000 (0:00:05.554) 0:00:06.020 ******* 2026-02-09 04:42:30.906880 | orchestrator | changed: [testbed-manager] 2026-02-09 04:42:30.906887 | orchestrator | 2026-02-09 04:42:30.906895 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-09 04:42:30.906903 | orchestrator | 2026-02-09 04:42:30.906911 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-09 04:42:30.906919 | orchestrator | Monday 09 February 2026 04:42:09 +0000 (0:00:24.261) 0:00:30.281 ******* 2026-02-09 04:42:30.906926 | orchestrator | ok: [testbed-manager] 2026-02-09 04:42:30.906934 | orchestrator | 2026-02-09 04:42:30.906942 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-09 04:42:30.906950 | orchestrator | Monday 09 
February 2026 04:42:10 +0000 (0:00:01.204) 0:00:31.486 ******* 2026-02-09 04:42:30.906958 | orchestrator | ok: [testbed-manager] 2026-02-09 04:42:30.906966 | orchestrator | 2026-02-09 04:42:30.906973 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-09 04:42:30.907019 | orchestrator | Monday 09 February 2026 04:42:10 +0000 (0:00:00.166) 0:00:31.652 ******* 2026-02-09 04:42:30.907034 | orchestrator | ok: [testbed-manager] 2026-02-09 04:42:30.907042 | orchestrator | 2026-02-09 04:42:30.907050 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-09 04:42:30.907058 | orchestrator | Monday 09 February 2026 04:42:27 +0000 (0:00:17.119) 0:00:48.772 ******* 2026-02-09 04:42:30.907066 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:42:30.907074 | orchestrator | 2026-02-09 04:42:30.907081 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-09 04:42:30.907089 | orchestrator | Monday 09 February 2026 04:42:28 +0000 (0:00:00.230) 0:00:49.002 ******* 2026-02-09 04:42:30.907097 | orchestrator | changed: [testbed-manager] 2026-02-09 04:42:30.907105 | orchestrator | 2026-02-09 04:42:30.907112 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:42:30.907121 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 04:42:30.907130 | orchestrator | 2026-02-09 04:42:30.907138 | orchestrator | 2026-02-09 04:42:30.907146 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:42:30.907154 | orchestrator | Monday 09 February 2026 04:42:30 +0000 (0:00:02.325) 0:00:51.328 ******* 2026-02-09 04:42:30.907161 | orchestrator | =============================================================================== 2026-02-09 04:42:30.907169 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 24.26s 2026-02-09 04:42:30.907177 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.12s 2026-02-09 04:42:30.907186 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.55s 2026-02-09 04:42:30.907216 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.33s 2026-02-09 04:42:30.907226 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.20s 2026-02-09 04:42:30.907235 | orchestrator | Include cert_manager role ----------------------------------------------- 0.26s 2026-02-09 04:42:30.907244 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.23s 2026-02-09 04:42:30.907254 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.17s 2026-02-09 04:42:31.294287 | orchestrator | + osism apply magnum 2026-02-09 04:42:33.464584 | orchestrator | 2026-02-09 04:42:33 | INFO  | Task 51b27f4b-1ede-4e9b-b6bf-0ccf92ae0667 (magnum) was prepared for execution. 2026-02-09 04:42:33.464678 | orchestrator | 2026-02-09 04:42:33 | INFO  | It takes a moment until task 51b27f4b-1ede-4e9b-b6bf-0ccf92ae0667 (magnum) has been started and output is visible here. 
2026-02-09 04:43:14.800556 | orchestrator | 2026-02-09 04:43:14.800682 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 04:43:14.800704 | orchestrator | 2026-02-09 04:43:14.800716 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 04:43:14.800728 | orchestrator | Monday 09 February 2026 04:42:38 +0000 (0:00:00.320) 0:00:00.320 ******* 2026-02-09 04:43:14.800739 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:43:14.800761 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:43:14.800781 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:43:14.800800 | orchestrator | 2026-02-09 04:43:14.800817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 04:43:14.800829 | orchestrator | Monday 09 February 2026 04:42:38 +0000 (0:00:00.348) 0:00:00.668 ******* 2026-02-09 04:43:14.800840 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-09 04:43:14.800851 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-09 04:43:14.800862 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-09 04:43:14.800873 | orchestrator | 2026-02-09 04:43:14.800884 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-09 04:43:14.800895 | orchestrator | 2026-02-09 04:43:14.800906 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-09 04:43:14.800917 | orchestrator | Monday 09 February 2026 04:42:38 +0000 (0:00:00.482) 0:00:01.151 ******* 2026-02-09 04:43:14.800928 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:43:14.800939 | orchestrator | 2026-02-09 04:43:14.800950 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-09 
04:43:14.800961 | orchestrator | Monday 09 February 2026 04:42:39 +0000 (0:00:00.656) 0:00:01.807 ******* 2026-02-09 04:43:14.800973 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-09 04:43:14.800984 | orchestrator | 2026-02-09 04:43:14.800995 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-09 04:43:14.801005 | orchestrator | Monday 09 February 2026 04:42:43 +0000 (0:00:03.536) 0:00:05.344 ******* 2026-02-09 04:43:14.801016 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-09 04:43:14.801027 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-09 04:43:14.801038 | orchestrator | 2026-02-09 04:43:14.801049 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-09 04:43:14.801060 | orchestrator | Monday 09 February 2026 04:42:49 +0000 (0:00:06.149) 0:00:11.493 ******* 2026-02-09 04:43:14.801071 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-09 04:43:14.801081 | orchestrator | 2026-02-09 04:43:14.801092 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-09 04:43:14.801103 | orchestrator | Monday 09 February 2026 04:42:52 +0000 (0:00:03.291) 0:00:14.785 ******* 2026-02-09 04:43:14.801114 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-09 04:43:14.801191 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-09 04:43:14.801205 | orchestrator | 2026-02-09 04:43:14.801231 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-09 04:43:14.801242 | orchestrator | Monday 09 February 2026 04:42:56 +0000 (0:00:03.655) 0:00:18.441 ******* 2026-02-09 04:43:14.801253 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-09 04:43:14.801264 | orchestrator | 2026-02-09 04:43:14.801274 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-09 04:43:14.801285 | orchestrator | Monday 09 February 2026 04:42:59 +0000 (0:00:03.065) 0:00:21.506 ******* 2026-02-09 04:43:14.801296 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-09 04:43:14.801307 | orchestrator | 2026-02-09 04:43:14.801318 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-09 04:43:14.801328 | orchestrator | Monday 09 February 2026 04:43:02 +0000 (0:00:03.633) 0:00:25.140 ******* 2026-02-09 04:43:14.801339 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:43:14.801350 | orchestrator | 2026-02-09 04:43:14.801361 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-09 04:43:14.801372 | orchestrator | Monday 09 February 2026 04:43:06 +0000 (0:00:03.147) 0:00:28.288 ******* 2026-02-09 04:43:14.801383 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:43:14.801393 | orchestrator | 2026-02-09 04:43:14.801404 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-09 04:43:14.801415 | orchestrator | Monday 09 February 2026 04:43:09 +0000 (0:00:03.774) 0:00:32.063 ******* 2026-02-09 04:43:14.801426 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:43:14.801437 | orchestrator | 2026-02-09 04:43:14.801447 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-09 04:43:14.801458 | orchestrator | Monday 09 February 2026 04:43:13 +0000 (0:00:03.237) 0:00:35.300 ******* 2026-02-09 04:43:14.801493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:14.801509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:14.801521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:14.801548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:14.801562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:14.801580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:22.449540 | orchestrator | 2026-02-09 04:43:22.449620 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-09 04:43:22.449629 | orchestrator | Monday 09 February 2026 04:43:14 +0000 (0:00:01.689) 0:00:36.990 ******* 2026-02-09 04:43:22.449635 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:43:22.449643 | orchestrator | 2026-02-09 04:43:22.449648 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-09 04:43:22.449654 | orchestrator | Monday 09 February 2026 04:43:14 +0000 (0:00:00.163) 0:00:37.153 ******* 2026-02-09 04:43:22.449659 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:43:22.449664 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:43:22.449669 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:43:22.449674 | orchestrator | 2026-02-09 04:43:22.449679 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-09 04:43:22.449684 | orchestrator | Monday 09 February 2026 04:43:15 +0000 (0:00:00.322) 0:00:37.476 ******* 2026-02-09 04:43:22.449706 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-09 04:43:22.449712 | orchestrator | 2026-02-09 04:43:22.449717 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-09 04:43:22.449722 | orchestrator | Monday 09 February 2026 04:43:16 +0000 (0:00:00.895) 0:00:38.371 ******* 2026-02-09 04:43:22.449729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:22.449748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:22.449755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:22.449772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:22.449779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:22.449789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:22.449795 | orchestrator | 2026-02-09 04:43:22.449800 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-09 04:43:22.449806 
| orchestrator | Monday 09 February 2026 04:43:18 +0000 (0:00:02.448) 0:00:40.820 ******* 2026-02-09 04:43:22.449811 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:43:22.449817 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:43:22.449825 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:43:22.449830 | orchestrator | 2026-02-09 04:43:22.449835 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-09 04:43:22.449840 | orchestrator | Monday 09 February 2026 04:43:19 +0000 (0:00:00.558) 0:00:41.379 ******* 2026-02-09 04:43:22.449846 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 04:43:22.449851 | orchestrator | 2026-02-09 04:43:22.449856 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-09 04:43:22.449861 | orchestrator | Monday 09 February 2026 04:43:19 +0000 (0:00:00.651) 0:00:42.030 ******* 2026-02-09 04:43:22.449867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:22.449877 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:23.439236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:23.439421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:23.439458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:23.439471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:23.439483 | orchestrator | 2026-02-09 04:43:23.439496 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-09 04:43:23.439508 | orchestrator | Monday 09 February 2026 04:43:22 +0000 (0:00:02.620) 0:00:44.651 ******* 2026-02-09 04:43:23.439538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-09 04:43:23.439574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:43:23.439587 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:43:23.439605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-09 04:43:23.439618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:43:23.439629 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:43:23.439641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-09 04:43:23.439669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:43:26.969451 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:43:26.969597 | orchestrator | 2026-02-09 
04:43:26.969619 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-09 04:43:26.969637 | orchestrator | Monday 09 February 2026 04:43:23 +0000 (0:00:00.981) 0:00:45.633 ******* 2026-02-09 04:43:26.969656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-09 04:43:26.969699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:43:26.969715 | 
orchestrator | skipping: [testbed-node-0] 2026-02-09 04:43:26.969730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-09 04:43:26.969745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:43:26.969792 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:43:26.969835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-09 04:43:26.969853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:43:26.969869 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:43:26.969885 | orchestrator | 2026-02-09 04:43:26.969901 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-09 04:43:26.969917 | orchestrator | Monday 09 February 2026 04:43:24 +0000 (0:00:00.910) 0:00:46.543 ******* 2026-02-09 04:43:26.969943 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:26.969963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:26.970001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:33.519622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:33.519740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:33.519772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:33.519786 | orchestrator | 2026-02-09 04:43:33.519800 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-09 04:43:33.519812 | orchestrator | Monday 09 February 2026 04:43:26 +0000 (0:00:02.627) 0:00:49.170 ******* 2026-02-09 04:43:33.519842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:33.519873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:33.519885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:33.519902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:33.519913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:33.519932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:33.519944 | orchestrator | 2026-02-09 04:43:33.519955 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-09 04:43:33.519966 | orchestrator | Monday 09 February 2026 04:43:32 +0000 (0:00:05.787) 0:00:54.958 ******* 2026-02-09 04:43:33.519986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-09 04:43:35.415686 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:43:35.415754 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:43:35.415775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-09 04:43:35.415793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:43:35.415797 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:43:35.415802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-09 04:43:35.415814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 04:43:35.415819 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:43:35.415823 | orchestrator | 2026-02-09 04:43:35.415827 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-09 04:43:35.415832 | orchestrator | Monday 09 February 2026 04:43:33 +0000 (0:00:00.764) 0:00:55.723 ******* 2026-02-09 04:43:35.415837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:35.415845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:35.415853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-09 04:43:35.415857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:43:35.415865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-09 04:44:31.690924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-09 04:44:31.692583 | orchestrator | 2026-02-09 04:44:31.693987 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-09 04:44:31.694111 | orchestrator | Monday 09 February 2026 04:43:35 +0000 (0:00:01.889) 0:00:57.612 ******* 2026-02-09 04:44:31.694126 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:44:31.694138 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:44:31.694148 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:44:31.694158 | orchestrator | 2026-02-09 04:44:31.694168 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-09 04:44:31.694178 | orchestrator | Monday 09 February 2026 04:43:36 +0000 (0:00:00.628) 0:00:58.240 ******* 2026-02-09 04:44:31.694188 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:44:31.694197 | orchestrator | 2026-02-09 04:44:31.694207 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-09 04:44:31.694216 | orchestrator | Monday 09 February 2026 04:43:38 +0000 (0:00:02.068) 0:01:00.308 ******* 2026-02-09 04:44:31.694226 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:44:31.694235 | orchestrator | 2026-02-09 04:44:31.694245 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-09 04:44:31.694254 | orchestrator | Monday 09 February 2026 04:43:40 +0000 (0:00:02.227) 0:01:02.536 ******* 2026-02-09 04:44:31.694264 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:44:31.694273 | orchestrator | 2026-02-09 04:44:31.694310 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-09 04:44:31.694326 | orchestrator | Monday 09 February 2026 04:43:56 +0000 (0:00:15.790) 0:01:18.326 ******* 2026-02-09 04:44:31.694341 | orchestrator | 2026-02-09 04:44:31.694357 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-09 04:44:31.694372 | orchestrator | Monday 09 February 2026 04:43:56 +0000 (0:00:00.082) 0:01:18.409 ******* 2026-02-09 04:44:31.694386 | orchestrator | 2026-02-09 04:44:31.694396 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-09 04:44:31.694406 | orchestrator | Monday 09 February 2026 04:43:56 +0000 (0:00:00.079) 0:01:18.488 ******* 2026-02-09 04:44:31.694415 | orchestrator | 2026-02-09 04:44:31.694425 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-09 04:44:31.694434 | orchestrator | Monday 09 February 2026 04:43:56 +0000 (0:00:00.073) 0:01:18.562 ******* 2026-02-09 04:44:31.694444 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:44:31.694453 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:44:31.694463 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:44:31.694472 | orchestrator | 2026-02-09 04:44:31.694482 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-09 04:44:31.694491 | orchestrator | Monday 09 February 2026 04:44:15 +0000 (0:00:19.509) 0:01:38.072 ******* 2026-02-09 04:44:31.694501 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:44:31.694510 | orchestrator | changed: [testbed-node-2] 2026-02-09 04:44:31.694520 | orchestrator | changed: [testbed-node-1] 2026-02-09 04:44:31.694529 | orchestrator | 2026-02-09 04:44:31.694538 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:44:31.694550 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 04:44:31.694562 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-09 04:44:31.694571 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-09 04:44:31.694581 | orchestrator | 2026-02-09 04:44:31.694590 | orchestrator | 2026-02-09 04:44:31.694600 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:44:31.694610 | orchestrator | Monday 09 February 2026 04:44:31 +0000 (0:00:15.307) 0:01:53.379 ******* 2026-02-09 04:44:31.694619 | orchestrator | =============================================================================== 2026-02-09 04:44:31.694638 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.51s 2026-02-09 04:44:31.694648 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.79s 2026-02-09 04:44:31.694658 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.31s 2026-02-09 04:44:31.694668 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.15s 2026-02-09 04:44:31.694677 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.79s 2026-02-09 04:44:31.694687 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.77s 2026-02-09 04:44:31.694696 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.66s 2026-02-09 04:44:31.694737 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.63s 2026-02-09 04:44:31.694748 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.54s 2026-02-09 04:44:31.694757 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.29s 2026-02-09 04:44:31.694767 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.24s 2026-02-09 04:44:31.694776 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.15s 2026-02-09 04:44:31.694786 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.07s 2026-02-09 04:44:31.694795 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.63s 2026-02-09 04:44:31.694805 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.62s 2026-02-09 04:44:31.694814 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.45s 2026-02-09 04:44:31.694823 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.23s 2026-02-09 04:44:31.694843 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.07s 2026-02-09 04:44:31.694853 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.89s 2026-02-09 04:44:31.694863 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.69s 2026-02-09 04:44:32.493111 | orchestrator | ok: Runtime: 1:41:37.963968 2026-02-09 04:44:32.728751 | 2026-02-09 04:44:32.728932 | TASK [Deploy in a nutshell] 2026-02-09 04:44:33.262329 | orchestrator | skipping: Conditional result was False 2026-02-09 04:44:33.281003 | 2026-02-09 04:44:33.281254 | TASK [Bootstrap services] 2026-02-09 04:44:33.999429 | orchestrator | 2026-02-09 04:44:33.999558 | orchestrator | # BOOTSTRAP 2026-02-09 04:44:33.999570 | orchestrator | 2026-02-09 04:44:33.999577 | orchestrator | + set -e 2026-02-09 04:44:33.999583 | orchestrator | + echo 2026-02-09 04:44:33.999590 | orchestrator | + echo '# BOOTSTRAP' 2026-02-09 04:44:33.999599 | orchestrator | + echo 2026-02-09 04:44:33.999624 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-09 04:44:34.006135 | orchestrator | + set -e 2026-02-09 04:44:34.006183 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-09 04:44:36.334976 | orchestrator | 2026-02-09 04:44:36 | INFO  | It takes a 
moment until task 1bb1a8b4-f728-4433-9737-08666d0b7b35 (flavor-manager) has been started and output is visible here. 2026-02-09 04:44:44.872119 | orchestrator | 2026-02-09 04:44:40 | INFO  | Flavor SCS-1L-1 created 2026-02-09 04:44:44.872244 | orchestrator | 2026-02-09 04:44:40 | INFO  | Flavor SCS-1L-1-5 created 2026-02-09 04:44:44.872260 | orchestrator | 2026-02-09 04:44:40 | INFO  | Flavor SCS-1V-2 created 2026-02-09 04:44:44.872270 | orchestrator | 2026-02-09 04:44:41 | INFO  | Flavor SCS-1V-2-5 created 2026-02-09 04:44:44.872280 | orchestrator | 2026-02-09 04:44:41 | INFO  | Flavor SCS-1V-4 created 2026-02-09 04:44:44.872289 | orchestrator | 2026-02-09 04:44:41 | INFO  | Flavor SCS-1V-4-10 created 2026-02-09 04:44:44.872298 | orchestrator | 2026-02-09 04:44:41 | INFO  | Flavor SCS-1V-8 created 2026-02-09 04:44:44.872336 | orchestrator | 2026-02-09 04:44:41 | INFO  | Flavor SCS-1V-8-20 created 2026-02-09 04:44:44.872360 | orchestrator | 2026-02-09 04:44:41 | INFO  | Flavor SCS-2V-4 created 2026-02-09 04:44:44.872370 | orchestrator | 2026-02-09 04:44:41 | INFO  | Flavor SCS-2V-4-10 created 2026-02-09 04:44:44.872379 | orchestrator | 2026-02-09 04:44:42 | INFO  | Flavor SCS-2V-8 created 2026-02-09 04:44:44.872388 | orchestrator | 2026-02-09 04:44:42 | INFO  | Flavor SCS-2V-8-20 created 2026-02-09 04:44:44.872397 | orchestrator | 2026-02-09 04:44:42 | INFO  | Flavor SCS-2V-16 created 2026-02-09 04:44:44.872406 | orchestrator | 2026-02-09 04:44:42 | INFO  | Flavor SCS-2V-16-50 created 2026-02-09 04:44:44.872415 | orchestrator | 2026-02-09 04:44:42 | INFO  | Flavor SCS-4V-8 created 2026-02-09 04:44:44.872424 | orchestrator | 2026-02-09 04:44:42 | INFO  | Flavor SCS-4V-8-20 created 2026-02-09 04:44:44.872433 | orchestrator | 2026-02-09 04:44:42 | INFO  | Flavor SCS-4V-16 created 2026-02-09 04:44:44.872441 | orchestrator | 2026-02-09 04:44:43 | INFO  | Flavor SCS-4V-16-50 created 2026-02-09 04:44:44.872450 | orchestrator | 2026-02-09 04:44:43 | INFO  | Flavor 
SCS-4V-32 created 2026-02-09 04:44:44.872459 | orchestrator | 2026-02-09 04:44:43 | INFO  | Flavor SCS-4V-32-100 created 2026-02-09 04:44:44.872468 | orchestrator | 2026-02-09 04:44:43 | INFO  | Flavor SCS-8V-16 created 2026-02-09 04:44:44.872477 | orchestrator | 2026-02-09 04:44:43 | INFO  | Flavor SCS-8V-16-50 created 2026-02-09 04:44:44.872486 | orchestrator | 2026-02-09 04:44:43 | INFO  | Flavor SCS-8V-32 created 2026-02-09 04:44:44.872495 | orchestrator | 2026-02-09 04:44:43 | INFO  | Flavor SCS-8V-32-100 created 2026-02-09 04:44:44.872503 | orchestrator | 2026-02-09 04:44:44 | INFO  | Flavor SCS-16V-32 created 2026-02-09 04:44:44.872512 | orchestrator | 2026-02-09 04:44:44 | INFO  | Flavor SCS-16V-32-100 created 2026-02-09 04:44:44.872521 | orchestrator | 2026-02-09 04:44:44 | INFO  | Flavor SCS-2V-4-20s created 2026-02-09 04:44:44.872530 | orchestrator | 2026-02-09 04:44:44 | INFO  | Flavor SCS-4V-8-50s created 2026-02-09 04:44:44.872539 | orchestrator | 2026-02-09 04:44:44 | INFO  | Flavor SCS-8V-32-100s created 2026-02-09 04:44:47.460847 | orchestrator | 2026-02-09 04:44:47 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-09 04:44:57.556208 | orchestrator | 2026-02-09 04:44:57 | INFO  | Task 08ae8c1e-4903-4435-8ae8-ec40cbf9951f (bootstrap-basic) was prepared for execution. 2026-02-09 04:44:57.556367 | orchestrator | 2026-02-09 04:44:57 | INFO  | It takes a moment until task 08ae8c1e-4903-4435-8ae8-ec40cbf9951f (bootstrap-basic) has been started and output is visible here. 
2026-02-09 04:45:43.071109 | orchestrator | 2026-02-09 04:45:43.071228 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-09 04:45:43.071246 | orchestrator | 2026-02-09 04:45:43.071258 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-09 04:45:43.071269 | orchestrator | Monday 09 February 2026 04:45:02 +0000 (0:00:00.077) 0:00:00.077 ******* 2026-02-09 04:45:43.071280 | orchestrator | ok: [localhost] 2026-02-09 04:45:43.071292 | orchestrator | 2026-02-09 04:45:43.071304 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-09 04:45:43.071315 | orchestrator | Monday 09 February 2026 04:45:04 +0000 (0:00:02.059) 0:00:02.136 ******* 2026-02-09 04:45:43.071326 | orchestrator | ok: [localhost] 2026-02-09 04:45:43.071336 | orchestrator | 2026-02-09 04:45:43.071348 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-09 04:45:43.071359 | orchestrator | Monday 09 February 2026 04:45:11 +0000 (0:00:07.233) 0:00:09.370 ******* 2026-02-09 04:45:43.071370 | orchestrator | changed: [localhost] 2026-02-09 04:45:43.071381 | orchestrator | 2026-02-09 04:45:43.071393 | orchestrator | TASK [Create public network] *************************************************** 2026-02-09 04:45:43.071404 | orchestrator | Monday 09 February 2026 04:45:18 +0000 (0:00:06.878) 0:00:16.248 ******* 2026-02-09 04:45:43.071444 | orchestrator | changed: [localhost] 2026-02-09 04:45:43.071455 | orchestrator | 2026-02-09 04:45:43.071466 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-09 04:45:43.071477 | orchestrator | Monday 09 February 2026 04:45:23 +0000 (0:00:05.409) 0:00:21.658 ******* 2026-02-09 04:45:43.071493 | orchestrator | changed: [localhost] 2026-02-09 04:45:43.071504 | orchestrator | 2026-02-09 04:45:43.071515 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-09 04:45:43.071526 | orchestrator | Monday 09 February 2026 04:45:30 +0000 (0:00:06.768) 0:00:28.426 ******* 2026-02-09 04:45:43.071537 | orchestrator | changed: [localhost] 2026-02-09 04:45:43.071548 | orchestrator | 2026-02-09 04:45:43.071559 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-09 04:45:43.071570 | orchestrator | Monday 09 February 2026 04:45:34 +0000 (0:00:04.459) 0:00:32.886 ******* 2026-02-09 04:45:43.071581 | orchestrator | changed: [localhost] 2026-02-09 04:45:43.071592 | orchestrator | 2026-02-09 04:45:43.071603 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-09 04:45:43.071625 | orchestrator | Monday 09 February 2026 04:45:38 +0000 (0:00:03.944) 0:00:36.831 ******* 2026-02-09 04:45:43.071638 | orchestrator | ok: [localhost] 2026-02-09 04:45:43.071651 | orchestrator | 2026-02-09 04:45:43.071665 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:45:43.071679 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 04:45:43.071693 | orchestrator | 2026-02-09 04:45:43.071706 | orchestrator | 2026-02-09 04:45:43.071719 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:45:43.071732 | orchestrator | Monday 09 February 2026 04:45:42 +0000 (0:00:03.814) 0:00:40.645 ******* 2026-02-09 04:45:43.071746 | orchestrator | =============================================================================== 2026-02-09 04:45:43.071758 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.23s 2026-02-09 04:45:43.071769 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.88s 2026-02-09 04:45:43.071780 | 
orchestrator | Set public network to default ------------------------------------------- 6.77s 2026-02-09 04:45:43.071791 | orchestrator | Create public network --------------------------------------------------- 5.41s 2026-02-09 04:45:43.071826 | orchestrator | Create public subnet ---------------------------------------------------- 4.46s 2026-02-09 04:45:43.071838 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.94s 2026-02-09 04:45:43.071849 | orchestrator | Create manager role ----------------------------------------------------- 3.81s 2026-02-09 04:45:43.071860 | orchestrator | Gathering Facts --------------------------------------------------------- 2.06s 2026-02-09 04:45:45.742584 | orchestrator | 2026-02-09 04:45:45 | INFO  | It takes a moment until task dfc439ec-5e5e-4658-acab-b90d48e625ab (image-manager) has been started and output is visible here. 2026-02-09 04:46:27.937029 | orchestrator | 2026-02-09 04:45:48 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-09 04:46:27.937164 | orchestrator | 2026-02-09 04:45:48 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-09 04:46:27.937187 | orchestrator | 2026-02-09 04:45:48 | INFO  | Importing image Cirros 0.6.2 2026-02-09 04:46:27.937199 | orchestrator | 2026-02-09 04:45:48 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-09 04:46:27.937213 | orchestrator | 2026-02-09 04:45:50 | INFO  | Waiting for image to leave queued state... 2026-02-09 04:46:27.937226 | orchestrator | 2026-02-09 04:45:52 | INFO  | Waiting for import to complete... 
2026-02-09 04:46:27.937239 | orchestrator | 2026-02-09 04:46:03 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-09 04:46:27.937251 | orchestrator | 2026-02-09 04:46:03 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-09 04:46:27.937264 | orchestrator | 2026-02-09 04:46:03 | INFO  | Setting internal_version = 0.6.2 2026-02-09 04:46:27.937276 | orchestrator | 2026-02-09 04:46:03 | INFO  | Setting image_original_user = cirros 2026-02-09 04:46:27.937288 | orchestrator | 2026-02-09 04:46:03 | INFO  | Adding tag os:cirros 2026-02-09 04:46:27.937301 | orchestrator | 2026-02-09 04:46:03 | INFO  | Setting property architecture: x86_64 2026-02-09 04:46:27.937312 | orchestrator | 2026-02-09 04:46:04 | INFO  | Setting property hw_disk_bus: scsi 2026-02-09 04:46:27.937323 | orchestrator | 2026-02-09 04:46:04 | INFO  | Setting property hw_rng_model: virtio 2026-02-09 04:46:27.937335 | orchestrator | 2026-02-09 04:46:04 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-09 04:46:27.937346 | orchestrator | 2026-02-09 04:46:04 | INFO  | Setting property hw_watchdog_action: reset 2026-02-09 04:46:27.937357 | orchestrator | 2026-02-09 04:46:05 | INFO  | Setting property hypervisor_type: qemu 2026-02-09 04:46:27.937368 | orchestrator | 2026-02-09 04:46:05 | INFO  | Setting property os_distro: cirros 2026-02-09 04:46:27.937379 | orchestrator | 2026-02-09 04:46:05 | INFO  | Setting property os_purpose: minimal 2026-02-09 04:46:27.937391 | orchestrator | 2026-02-09 04:46:05 | INFO  | Setting property replace_frequency: never 2026-02-09 04:46:27.937403 | orchestrator | 2026-02-09 04:46:06 | INFO  | Setting property uuid_validity: none 2026-02-09 04:46:27.937415 | orchestrator | 2026-02-09 04:46:06 | INFO  | Setting property provided_until: none 2026-02-09 04:46:27.937428 | orchestrator | 2026-02-09 04:46:06 | INFO  | Setting property image_description: Cirros 2026-02-09 04:46:27.937440 | orchestrator | 2026-02-09 04:46:06 | INFO  | 
Setting property image_name: Cirros 2026-02-09 04:46:27.937451 | orchestrator | 2026-02-09 04:46:07 | INFO  | Setting property internal_version: 0.6.2 2026-02-09 04:46:27.937463 | orchestrator | 2026-02-09 04:46:07 | INFO  | Setting property image_original_user: cirros 2026-02-09 04:46:27.937524 | orchestrator | 2026-02-09 04:46:07 | INFO  | Setting property os_version: 0.6.2 2026-02-09 04:46:27.937549 | orchestrator | 2026-02-09 04:46:07 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-09 04:46:27.937564 | orchestrator | 2026-02-09 04:46:08 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-09 04:46:27.937576 | orchestrator | 2026-02-09 04:46:08 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-09 04:46:27.937588 | orchestrator | 2026-02-09 04:46:08 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-09 04:46:27.937600 | orchestrator | 2026-02-09 04:46:08 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-09 04:46:27.937612 | orchestrator | 2026-02-09 04:46:08 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-09 04:46:27.937629 | orchestrator | 2026-02-09 04:46:08 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-09 04:46:27.937641 | orchestrator | 2026-02-09 04:46:08 | INFO  | Importing image Cirros 0.6.3 2026-02-09 04:46:27.937652 | orchestrator | 2026-02-09 04:46:08 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-09 04:46:27.937663 | orchestrator | 2026-02-09 04:46:09 | INFO  | Waiting for image to leave queued state... 2026-02-09 04:46:27.937676 | orchestrator | 2026-02-09 04:46:11 | INFO  | Waiting for import to complete... 
2026-02-09 04:46:27.937708 | orchestrator | 2026-02-09 04:46:21 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-09 04:46:27.937720 | orchestrator | 2026-02-09 04:46:21 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-09 04:46:27.937732 | orchestrator | 2026-02-09 04:46:21 | INFO  | Setting internal_version = 0.6.3 2026-02-09 04:46:27.937743 | orchestrator | 2026-02-09 04:46:21 | INFO  | Setting image_original_user = cirros 2026-02-09 04:46:27.937754 | orchestrator | 2026-02-09 04:46:21 | INFO  | Adding tag os:cirros 2026-02-09 04:46:27.937765 | orchestrator | 2026-02-09 04:46:22 | INFO  | Setting property architecture: x86_64 2026-02-09 04:46:27.937776 | orchestrator | 2026-02-09 04:46:22 | INFO  | Setting property hw_disk_bus: scsi 2026-02-09 04:46:27.937788 | orchestrator | 2026-02-09 04:46:22 | INFO  | Setting property hw_rng_model: virtio 2026-02-09 04:46:27.937799 | orchestrator | 2026-02-09 04:46:22 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-09 04:46:27.937811 | orchestrator | 2026-02-09 04:46:23 | INFO  | Setting property hw_watchdog_action: reset 2026-02-09 04:46:27.937823 | orchestrator | 2026-02-09 04:46:23 | INFO  | Setting property hypervisor_type: qemu 2026-02-09 04:46:27.937834 | orchestrator | 2026-02-09 04:46:23 | INFO  | Setting property os_distro: cirros 2026-02-09 04:46:27.937846 | orchestrator | 2026-02-09 04:46:23 | INFO  | Setting property os_purpose: minimal 2026-02-09 04:46:27.937858 | orchestrator | 2026-02-09 04:46:24 | INFO  | Setting property replace_frequency: never 2026-02-09 04:46:27.937869 | orchestrator | 2026-02-09 04:46:24 | INFO  | Setting property uuid_validity: none 2026-02-09 04:46:27.937882 | orchestrator | 2026-02-09 04:46:24 | INFO  | Setting property provided_until: none 2026-02-09 04:46:27.937893 | orchestrator | 2026-02-09 04:46:24 | INFO  | Setting property image_description: Cirros 2026-02-09 04:46:27.937906 | orchestrator | 2026-02-09 04:46:25 | INFO  | 
Setting property image_name: Cirros 2026-02-09 04:46:27.937917 | orchestrator | 2026-02-09 04:46:25 | INFO  | Setting property internal_version: 0.6.3 2026-02-09 04:46:27.937941 | orchestrator | 2026-02-09 04:46:25 | INFO  | Setting property image_original_user: cirros 2026-02-09 04:46:27.937953 | orchestrator | 2026-02-09 04:46:25 | INFO  | Setting property os_version: 0.6.3 2026-02-09 04:46:27.937965 | orchestrator | 2026-02-09 04:46:26 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-09 04:46:27.937978 | orchestrator | 2026-02-09 04:46:26 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-09 04:46:27.937989 | orchestrator | 2026-02-09 04:46:26 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-09 04:46:27.938000 | orchestrator | 2026-02-09 04:46:26 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-09 04:46:27.938011 | orchestrator | 2026-02-09 04:46:26 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-09 04:46:28.374696 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-02-09 04:46:30.977436 | orchestrator | 2026-02-09 04:46:30 | INFO  | date: 2026-02-09 2026-02-09 04:46:30.977592 | orchestrator | 2026-02-09 04:46:30 | INFO  | image: octavia-amphora-haproxy-2024.2.20260209.qcow2 2026-02-09 04:46:30.977634 | orchestrator | 2026-02-09 04:46:30 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260209.qcow2 2026-02-09 04:46:30.977649 | orchestrator | 2026-02-09 04:46:30 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260209.qcow2.CHECKSUM 2026-02-09 04:46:31.573420 | orchestrator | 2026-02-09 04:46:31 | INFO  | checksum: 86d119ebb4f8d601ae08bdbcc613b96848d9368e4080f00cc470e04c0256a2b7 2026-02-09 04:46:31.647734 | orchestrator | 
2026-02-09 04:46:31 | INFO  | It takes a moment until task 480fff83-d645-4878-b7f7-6cd667091cbc (image-manager) has been started and output is visible here. 2026-02-09 04:47:54.372565 | orchestrator | 2026-02-09 04:46:34 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-09' 2026-02-09 04:47:54.372744 | orchestrator | 2026-02-09 04:46:34 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260209.qcow2: 200 2026-02-09 04:47:54.372765 | orchestrator | 2026-02-09 04:46:34 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-09 2026-02-09 04:47:54.372783 | orchestrator | 2026-02-09 04:46:34 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260209.qcow2 2026-02-09 04:47:54.372803 | orchestrator | 2026-02-09 04:46:35 | INFO  | Waiting for image to leave queued state... 2026-02-09 04:47:54.372822 | orchestrator | 2026-02-09 04:46:37 | INFO  | Waiting for import to complete... 2026-02-09 04:47:54.372842 | orchestrator | 2026-02-09 04:46:47 | INFO  | Waiting for import to complete... 2026-02-09 04:47:54.372862 | orchestrator | 2026-02-09 04:46:57 | INFO  | Waiting for import to complete... 2026-02-09 04:47:54.372881 | orchestrator | 2026-02-09 04:47:07 | INFO  | Waiting for import to complete... 2026-02-09 04:47:54.372902 | orchestrator | 2026-02-09 04:47:18 | INFO  | Waiting for import to complete... 2026-02-09 04:47:54.372915 | orchestrator | 2026-02-09 04:47:28 | INFO  | Waiting for import to complete... 2026-02-09 04:47:54.372926 | orchestrator | 2026-02-09 04:47:38 | INFO  | Waiting for import to complete... 
2026-02-09 04:47:54.372938 | orchestrator | 2026-02-09 04:47:48 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-09' successfully completed, reloading images 2026-02-09 04:47:54.372950 | orchestrator | 2026-02-09 04:47:49 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-09' 2026-02-09 04:47:54.372989 | orchestrator | 2026-02-09 04:47:49 | INFO  | Setting internal_version = 2026-02-09 2026-02-09 04:47:54.373001 | orchestrator | 2026-02-09 04:47:49 | INFO  | Setting image_original_user = ubuntu 2026-02-09 04:47:54.373012 | orchestrator | 2026-02-09 04:47:49 | INFO  | Adding tag amphora 2026-02-09 04:47:54.373023 | orchestrator | 2026-02-09 04:47:49 | INFO  | Adding tag os:ubuntu 2026-02-09 04:47:54.373034 | orchestrator | 2026-02-09 04:47:49 | INFO  | Setting property architecture: x86_64 2026-02-09 04:47:54.373044 | orchestrator | 2026-02-09 04:47:49 | INFO  | Setting property hw_disk_bus: scsi 2026-02-09 04:47:54.373055 | orchestrator | 2026-02-09 04:47:49 | INFO  | Setting property hw_rng_model: virtio 2026-02-09 04:47:54.373066 | orchestrator | 2026-02-09 04:47:50 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-09 04:47:54.373085 | orchestrator | 2026-02-09 04:47:50 | INFO  | Setting property hw_watchdog_action: reset 2026-02-09 04:47:54.373103 | orchestrator | 2026-02-09 04:47:50 | INFO  | Setting property hypervisor_type: qemu 2026-02-09 04:47:54.373122 | orchestrator | 2026-02-09 04:47:50 | INFO  | Setting property os_distro: ubuntu 2026-02-09 04:47:54.373139 | orchestrator | 2026-02-09 04:47:51 | INFO  | Setting property replace_frequency: quarterly 2026-02-09 04:47:54.373156 | orchestrator | 2026-02-09 04:47:51 | INFO  | Setting property uuid_validity: last-1 2026-02-09 04:47:54.373167 | orchestrator | 2026-02-09 04:47:51 | INFO  | Setting property provided_until: none 2026-02-09 04:47:54.373178 | orchestrator | 2026-02-09 04:47:52 | INFO  | Setting property os_purpose: network 2026-02-09 04:47:54.373205 | orchestrator 
| 2026-02-09 04:47:52 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-02-09 04:47:54.373217 | orchestrator | 2026-02-09 04:47:52 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-02-09 04:47:54.373228 | orchestrator | 2026-02-09 04:47:52 | INFO  | Setting property internal_version: 2026-02-09 2026-02-09 04:47:54.373239 | orchestrator | 2026-02-09 04:47:52 | INFO  | Setting property image_original_user: ubuntu 2026-02-09 04:47:54.373249 | orchestrator | 2026-02-09 04:47:53 | INFO  | Setting property os_version: 2026-02-09 2026-02-09 04:47:54.373269 | orchestrator | 2026-02-09 04:47:53 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260209.qcow2 2026-02-09 04:47:54.373286 | orchestrator | 2026-02-09 04:47:53 | INFO  | Setting property image_build_date: 2026-02-09 2026-02-09 04:47:54.373302 | orchestrator | 2026-02-09 04:47:53 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-09' 2026-02-09 04:47:54.373342 | orchestrator | 2026-02-09 04:47:53 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-09' 2026-02-09 04:47:54.373361 | orchestrator | 2026-02-09 04:47:54 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-09 04:47:54.373377 | orchestrator | 2026-02-09 04:47:54 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-09 04:47:54.373397 | orchestrator | 2026-02-09 04:47:54 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-09 04:47:54.373415 | orchestrator | 2026-02-09 04:47:54 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-09 04:47:54.982145 | orchestrator | ok: Runtime: 0:03:21.157605 2026-02-09 04:47:55.001132 | 2026-02-09 04:47:55.001300 | TASK [Run checks] 2026-02-09 04:47:55.797671 | orchestrator | + set -e 2026-02-09 04:47:55.797885 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-09 04:47:55.797912 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 04:47:55.797935 | orchestrator | ++ INTERACTIVE=false 2026-02-09 04:47:55.797949 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 04:47:55.797962 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-09 04:47:55.797976 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-09 04:47:55.798454 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-09 04:47:55.804571 | orchestrator | 2026-02-09 04:47:55.804649 | orchestrator | # CHECK 2026-02-09 04:47:55.804673 | orchestrator | 2026-02-09 04:47:55.804692 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 04:47:55.804713 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 04:47:55.804725 | orchestrator | + echo 2026-02-09 04:47:55.804736 | orchestrator | + echo '# CHECK' 2026-02-09 04:47:55.804747 | orchestrator | + echo 2026-02-09 04:47:55.804764 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-09 04:47:55.805645 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-09 04:47:55.873145 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-09 04:47:55.873261 | orchestrator | 2026-02-09 04:47:55.873290 | orchestrator | ## Containers @ testbed-manager 2026-02-09 04:47:55.873311 | orchestrator | 2026-02-09 04:47:55.873329 | orchestrator | + echo 2026-02-09 04:47:55.873346 | orchestrator | + echo '## Containers @ testbed-manager' 2026-02-09 04:47:55.873365 | orchestrator | + echo 2026-02-09 04:47:55.873383 | orchestrator | + osism container testbed-manager ps 2026-02-09 04:47:57.994495 | orchestrator | 2026-02-09 04:47:57 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-02-09 04:47:58.384049 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-09 04:47:58.384181 | orchestrator | 7ae1d4dc0575 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-02-09 04:47:58.384207 | orchestrator | 12a1ccaa07aa registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-02-09 04:47:58.384220 | orchestrator | 82b067ea999d registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-09 04:47:58.384232 | orchestrator | 8ba7fff49c5d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-09 04:47:58.384243 | orchestrator | 2ec07478b1ee registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-02-09 04:47:58.384259 | orchestrator | 498540129c44 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 57 minutes ago Up 57 minutes cephclient 2026-02-09 04:47:58.384272 | orchestrator | 7d959d8e937d registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-09 04:47:58.384283 | orchestrator | 8fdb34c1d9ef registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-09 04:47:58.384323 | orchestrator | ca6459500ec3 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-09 04:47:58.384335 | orchestrator | e836b22db5ac registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-02-09 04:47:58.384346 | orchestrator | bc6659c613ce phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-02-09 04:47:58.384357 | 
orchestrator | d534a6d26cc7 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-02-09 04:47:58.384369 | orchestrator | 02530b5b4de2 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-02-09 04:47:58.384381 | orchestrator | 621a8293af12 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-02-09 04:47:58.384414 | orchestrator | a532b6c52fd8 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-02-09 04:47:58.384436 | orchestrator | 73e060232684 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-02-09 04:47:58.384447 | orchestrator | 66716244372d registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-02-09 04:47:58.384458 | orchestrator | 7f62dce4500f registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-02-09 04:47:58.384470 | orchestrator | 35ff5bea5353 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-02-09 04:47:58.384481 | orchestrator | b7c9fc5e2e7f registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-02-09 04:47:58.384492 | orchestrator | 896e0c9ce58b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-02-09 04:47:58.384504 | orchestrator | da8e12e56414 registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-02-09 
04:47:58.384523 | orchestrator | 625f555919bf registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-02-09 04:47:58.384534 | orchestrator | 7c57f2e1c68b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-02-09 04:47:58.384545 | orchestrator | 7e046376b885 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-02-09 04:47:58.384557 | orchestrator | c4886cfd0752 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-02-09 04:47:58.384568 | orchestrator | 7f71bffc8ee4 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-02-09 04:47:58.384579 | orchestrator | a4c31015641e registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-02-09 04:47:58.384590 | orchestrator | d2f1e5a20bcb registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-02-09 04:47:58.384607 | orchestrator | c7c5e977c1fd registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-02-09 04:47:58.735426 | orchestrator | 2026-02-09 04:47:58.735536 | orchestrator | ## Images @ testbed-manager 2026-02-09 04:47:58.735554 | orchestrator | 2026-02-09 04:47:58.735566 | orchestrator | + echo 2026-02-09 04:47:58.735577 | orchestrator | + echo '## Images @ testbed-manager' 2026-02-09 04:47:58.735589 | orchestrator | + echo 2026-02-09 04:47:58.735606 | orchestrator | + osism container testbed-manager images 2026-02-09 04:48:01.198939 | orchestrator | 
REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-09 04:48:01.199084 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 3ff6d4c697b2 25 hours ago 238MB 2026-02-09 04:48:01.199104 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 12 days ago 41.4MB 2026-02-09 04:48:01.199118 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB 2026-02-09 04:48:01.199130 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB 2026-02-09 04:48:01.199141 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-09 04:48:01.199152 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-09 04:48:01.199164 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-09 04:48:01.199177 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB 2026-02-09 04:48:01.199188 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-09 04:48:01.199226 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB 2026-02-09 04:48:01.199237 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB 2026-02-09 04:48:01.199248 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-09 04:48:01.199259 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB 2026-02-09 04:48:01.199270 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB 2026-02-09 04:48:01.199281 | orchestrator | 
registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB 2026-02-09 04:48:01.199292 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB 2026-02-09 04:48:01.199302 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB 2026-02-09 04:48:01.199313 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB 2026-02-09 04:48:01.199324 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 2 months ago 334MB 2026-02-09 04:48:01.199335 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB 2026-02-09 04:48:01.199346 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB 2026-02-09 04:48:01.199356 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB 2026-02-09 04:48:01.199367 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB 2026-02-09 04:48:01.199378 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB 2026-02-09 04:48:01.199389 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-02-09 04:48:01.552487 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-09 04:48:01.553115 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-09 04:48:01.617161 | orchestrator | 2026-02-09 04:48:01.617279 | orchestrator | ## Containers @ testbed-node-0 2026-02-09 04:48:01.617308 | orchestrator | 2026-02-09 04:48:01.617327 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-09 04:48:01.617345 | orchestrator | + echo 2026-02-09 04:48:01.617362 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-02-09 04:48:01.617383 | orchestrator | + echo 2026-02-09 04:48:01.617402 | orchestrator | + osism container testbed-node-0 ps 2026-02-09 
04:48:04.312069 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-09 04:48:04.312182 | orchestrator | 45eea551ab3d registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-09 04:48:04.312223 | orchestrator | 2f33e3e14c5a registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-09 04:48:04.312236 | orchestrator | 4c51f82e9a91 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 7 minutes grafana 2026-02-09 04:48:04.312247 | orchestrator | 01d9f986c6ac registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-09 04:48:04.312286 | orchestrator | 257aee283a97 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-09 04:48:04.312298 | orchestrator | e9f3110fc798 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-09 04:48:04.312316 | orchestrator | 2a29a3f2d4df registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-09 04:48:04.312327 | orchestrator | e9201f855202 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-09 04:48:04.312338 | orchestrator | 8d2112ecc66c registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-09 04:48:04.312350 | orchestrator | 542d950be9d1 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-09 04:48:04.312361 | orchestrator | 8f11ff6683af registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-09 04:48:04.312372 | orchestrator | a2be7f3dd835 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-09 04:48:04.312383 | orchestrator | 639c2ed61066 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-09 04:48:04.312394 | orchestrator | 2c0c1efe6341 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-09 04:48:04.312405 | orchestrator | f3877ae46409 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-09 04:48:04.312416 | orchestrator | 073f0c9b8687 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-09 04:48:04.312427 | orchestrator | bf890ee10c04 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-09 04:48:04.312437 | orchestrator | 76362f9933d6 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-09 04:48:04.312448 | orchestrator | b41d1baf85ce registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-09 04:48:04.312486 | orchestrator | 26c070df9ff6 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-09 04:48:04.312498 | orchestrator | c1a2d1256613 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-09 04:48:04.312510 | orchestrator | 74ebfea9cc8c registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-09 04:48:04.312528 | orchestrator | 74e839d47c4a registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-09 04:48:04.312539 | orchestrator | c8fcc92ae340 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-09 04:48:04.312550 | orchestrator | 00ec6e290802 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-09 04:48:04.312567 | orchestrator | b7313e70b957 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-09 04:48:04.312578 | orchestrator | 72a2b3709e83 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-09 04:48:04.312589 | orchestrator | 1aa87e1380d4 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-09 04:48:04.312600 | orchestrator | 336afe40445b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 
2026-02-09 04:48:04.312610 | orchestrator | b77fc7300653 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-09 04:48:04.312622 | orchestrator | 96d8ab5690be registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-09 04:48:04.312708 | orchestrator | 6fb8fab92dd4 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) barbican_api 2026-02-09 04:48:04.312719 | orchestrator | 2f52638aa9f8 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-02-09 04:48:04.312730 | orchestrator | 4e80a8e436e2 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-09 04:48:04.312741 | orchestrator | 2277df46ee7c registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-09 04:48:04.312752 | orchestrator | babaffd2d22a registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-09 04:48:04.312763 | orchestrator | 5590a75d7ce3 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-02-09 04:48:04.312773 | orchestrator | 05528059273d registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-09 04:48:04.312785 | orchestrator | 76b0d3d66506 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) 
skyline_apiserver 2026-02-09 04:48:04.312805 | orchestrator | fef6c2e16ba3 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-09 04:48:04.312825 | orchestrator | 33cc4dbb6be6 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-09 04:48:04.312837 | orchestrator | 1a75e80e88bf registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-02-09 04:48:04.312854 | orchestrator | 007aae49d6fc registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-09 04:48:04.312865 | orchestrator | 046748ab2e26 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-09 04:48:04.312876 | orchestrator | 6c73840820eb registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-02-09 04:48:04.312887 | orchestrator | 71209f6c7230 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-02-09 04:48:04.312898 | orchestrator | 4d530035bd9e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 2026-02-09 04:48:04.312909 | orchestrator | 7f689195d907 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-02-09 04:48:04.312919 | orchestrator | 7444b3dc63c0 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-09 04:48:04.312930 | 
orchestrator | d5cd74fc19d3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-0 2026-02-09 04:48:04.312941 | orchestrator | 1978eb48ce05 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-02-09 04:48:04.312952 | orchestrator | a495b1786f93 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-02-09 04:48:04.312963 | orchestrator | dc91775ce889 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-09 04:48:04.312973 | orchestrator | 82630c65f2da registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-09 04:48:04.312984 | orchestrator | 817342cbd9f7 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-09 04:48:04.312995 | orchestrator | 9173dceab17d registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-09 04:48:04.313011 | orchestrator | 8a50da2c5e6d registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-09 04:48:04.313022 | orchestrator | 3d4c6721dfd1 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-09 04:48:04.313039 | orchestrator | 41f16647b784 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-09 04:48:04.313056 | orchestrator | 4cac1478a040 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-09 04:48:04.313068 | orchestrator | 96005446ae42 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-09 04:48:04.313079 | orchestrator | 8bca45c01e0a registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-09 04:48:04.313089 | orchestrator | f4adf63716e7 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-09 04:48:04.313100 | orchestrator | dc123f3ec53d registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-09 04:48:04.313111 | orchestrator | 2d734e2aca9d registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-09 04:48:04.313122 | orchestrator | 75585416d856 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-09 04:48:04.313133 | orchestrator | 658f0a11fdd5 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-09 04:48:04.313144 | orchestrator | 8f277c5ba762 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-09 04:48:04.313155 | orchestrator | 6931598cf41b registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-09 04:48:04.313166 | orchestrator | 71a3a835aaa2 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 
"dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-09 04:48:04.313177 | orchestrator | 14a8de54d894 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-09 04:48:04.729308 | orchestrator | 2026-02-09 04:48:04.729439 | orchestrator | ## Images @ testbed-node-0 2026-02-09 04:48:04.729460 | orchestrator | 2026-02-09 04:48:04.729473 | orchestrator | + echo 2026-02-09 04:48:04.729484 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-09 04:48:04.729509 | orchestrator | + echo 2026-02-09 04:48:04.729520 | orchestrator | + osism container testbed-node-0 images 2026-02-09 04:48:07.358961 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-09 04:48:07.359101 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-09 04:48:07.359118 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-09 04:48:07.359129 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-09 04:48:07.359141 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-09 04:48:07.359176 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-09 04:48:07.359188 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-09 04:48:07.359218 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-09 04:48:07.359229 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-09 04:48:07.359240 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-09 04:48:07.359251 | 
orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-09 04:48:07.359262 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-09 04:48:07.359272 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-09 04:48:07.359283 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-09 04:48:07.359294 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-09 04:48:07.359305 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-09 04:48:07.359316 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-09 04:48:07.359326 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-09 04:48:07.359337 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-09 04:48:07.359348 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-09 04:48:07.359358 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-09 04:48:07.359369 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-09 04:48:07.359380 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-09 04:48:07.359391 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-09 
04:48:07.359402 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-09 04:48:07.359412 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-09 04:48:07.359423 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-09 04:48:07.359434 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-09 04:48:07.359451 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-09 04:48:07.359462 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-09 04:48:07.359473 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-09 04:48:07.359492 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-09 04:48:07.359523 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-09 04:48:07.359535 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-09 04:48:07.359546 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-09 04:48:07.359557 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-09 04:48:07.359568 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-09 04:48:07.359579 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-09 04:48:07.359590 | 
orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-09 04:48:07.359601 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-09 04:48:07.359612 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-09 04:48:07.359623 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-09 04:48:07.359662 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-09 04:48:07.359674 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-09 04:48:07.359684 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-09 04:48:07.359695 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-09 04:48:07.359706 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-09 04:48:07.359717 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-09 04:48:07.359728 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-09 04:48:07.359739 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-09 04:48:07.359749 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-09 04:48:07.359760 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-09 04:48:07.359771 | 
orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-09 04:48:07.359781 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-09 04:48:07.359792 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-09 04:48:07.359803 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-09 04:48:07.359814 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-09 04:48:07.359831 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-09 04:48:07.359842 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-09 04:48:07.359859 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-09 04:48:07.359870 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-09 04:48:07.359887 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-09 04:48:07.359905 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-09 04:48:07.359929 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-09 04:48:07.359967 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-09 04:48:07.359986 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-09 04:48:07.360004 | 
orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-09 04:48:07.360022 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-09 04:48:07.360041 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-09 04:48:07.360059 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-09 04:48:07.772013 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-09 04:48:07.773032 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-09 04:48:07.852498 | orchestrator | 2026-02-09 04:48:07.852600 | orchestrator | ## Containers @ testbed-node-1 2026-02-09 04:48:07.852652 | orchestrator | 2026-02-09 04:48:07.852729 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-09 04:48:07.852744 | orchestrator | + echo 2026-02-09 04:48:07.852757 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-09 04:48:07.852769 | orchestrator | + echo 2026-02-09 04:48:07.852781 | orchestrator | + osism container testbed-node-1 ps 2026-02-09 04:48:10.355260 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-09 04:48:10.355337 | orchestrator | dc9ed625155a registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-09 04:48:10.355347 | orchestrator | 1e613d668f09 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-09 04:48:10.355354 | orchestrator | 5358f8ead4b6 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-09 04:48:10.355361 | orchestrator | 38b4730ade0e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 
"dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-09 04:48:10.355369 | orchestrator | 6df85cb43936 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-09 04:48:10.355375 | orchestrator | 8ed503fcc8b0 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-09 04:48:10.355400 | orchestrator | d9661dae66e7 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-09 04:48:10.355407 | orchestrator | 509b8dc5a50c registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-09 04:48:10.355414 | orchestrator | 70cd8252f756 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-09 04:48:10.355420 | orchestrator | 0df75affb2b5 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-09 04:48:10.355426 | orchestrator | dd60d4dd6276 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-09 04:48:10.355433 | orchestrator | 142c0b778b42 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-09 04:48:10.355452 | orchestrator | a2c18d769d3d registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-09 04:48:10.355459 | orchestrator | 819ced801dee 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-09 04:48:10.355465 | orchestrator | 39283416aab2 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-09 04:48:10.355471 | orchestrator | 068aff093cfe registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-09 04:48:10.355478 | orchestrator | b1d1cf69cd7a registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-09 04:48:10.355484 | orchestrator | 943833dc57b3 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-09 04:48:10.355490 | orchestrator | 52391bee95d2 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-09 04:48:10.355510 | orchestrator | 4143e6850819 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-09 04:48:10.355517 | orchestrator | e4e7784401b6 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-09 04:48:10.355523 | orchestrator | 6caac46ade9d registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-09 04:48:10.355529 | orchestrator | dea27a4a6fd4 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-09 04:48:10.355535 | 
orchestrator | 137711dc0a20 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-09 04:48:10.355548 | orchestrator | 5bfa058d2e18 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-09 04:48:10.355558 | orchestrator | 54f723b7cd95 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-09 04:48:10.355568 | orchestrator | 3e0fe468af47 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-09 04:48:10.355579 | orchestrator | 0e7adbd59efc registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-09 04:48:10.355589 | orchestrator | 15006e2fbcd7 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-02-09 04:48:10.355598 | orchestrator | b39d71a3ec0b registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-09 04:48:10.355608 | orchestrator | 53bb9d0bdff0 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-09 04:48:10.355619 | orchestrator | f86d312298b1 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-09 04:48:10.355629 | orchestrator | eb6646ce0175 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes 
(healthy) cinder_backup 2026-02-09 04:48:10.355666 | orchestrator | ffbf1f554b87 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-09 04:48:10.355677 | orchestrator | 3069d30d76b8 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-09 04:48:10.355688 | orchestrator | 49e015d64e45 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-09 04:48:10.355707 | orchestrator | 5f1c2a6f6739 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-02-09 04:48:10.355718 | orchestrator | a2f845ff0b87 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-09 04:48:10.355729 | orchestrator | c7c937cc3000 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-09 04:48:10.355748 | orchestrator | 408cd14be45c registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-09 04:48:10.355759 | orchestrator | cb19d2866745 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-09 04:48:10.355778 | orchestrator | 379c7be8a983 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-02-09 04:48:10.355789 | orchestrator | bb6c5e525e14 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-09 
04:48:10.355799 | orchestrator | 0f6526b056ae registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-09 04:48:10.355809 | orchestrator | bb4b5de03677 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-02-09 04:48:10.355819 | orchestrator | 2665d72fb4d2 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-02-09 04:48:10.355830 | orchestrator | 4a4a97db5f5e registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 2026-02-09 04:48:10.355840 | orchestrator | 54ab025dd6d3 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-02-09 04:48:10.355851 | orchestrator | a5696fcad136 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-09 04:48:10.355863 | orchestrator | 9f1bf912eb4c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-1 2026-02-09 04:48:10.355874 | orchestrator | 4dda0932203c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-09 04:48:10.355885 | orchestrator | ab15bd6989cf registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-09 04:48:10.355896 | orchestrator | cbb061b1238a registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-09 04:48:10.355906 | orchestrator | 53c1d0b963c1 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-09 04:48:10.355918 | orchestrator | 253e7a90ef2f registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-09 04:48:10.355929 | orchestrator | 9d447767cfa8 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-09 04:48:10.355939 | orchestrator | ffdddd93d5ea registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-09 04:48:10.355950 | orchestrator | c139418920ac registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-09 04:48:10.355959 | orchestrator | 0d1d7b7fb8a2 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-09 04:48:10.355982 | orchestrator | 96be4fad411b registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-09 04:48:10.355993 | orchestrator | 98093bdf429e registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-09 04:48:10.356003 | orchestrator | b7351e8607fd registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-09 04:48:10.356014 | orchestrator | 62aa9eb055ba registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-09 04:48:10.356025 | orchestrator | aa01563750f0 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-09 04:48:10.356042 | orchestrator | 6fc26d9620a5 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-09 04:48:10.356053 | orchestrator | 9e3fb40a74db registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-09 04:48:10.356063 | orchestrator | e654328e8c01 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-09 04:48:10.356074 | orchestrator | 7fb4596a8ea4 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-09 04:48:10.356085 | orchestrator | f9615dfe8670 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-09 04:48:10.356100 | orchestrator | 9555310fd8da registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-09 04:48:10.356111 | orchestrator | cdc3b41a8eee registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-09 04:48:10.737389 | orchestrator | 2026-02-09 04:48:10.737479 | orchestrator | ## Images @ testbed-node-1 2026-02-09 04:48:10.737493 | orchestrator | 2026-02-09 04:48:10.737502 | orchestrator | + echo 2026-02-09 04:48:10.737511 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-09 04:48:10.737521 | orchestrator | + echo 2026-02-09 04:48:10.737530 | orchestrator | + osism container testbed-node-1 images 2026-02-09 04:48:13.486428 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-09 04:48:13.486514 | orchestrator | 
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-09 04:48:13.486524 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-09 04:48:13.486531 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-09 04:48:13.486539 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-09 04:48:13.486545 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-09 04:48:13.486551 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-09 04:48:13.486576 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-09 04:48:13.486583 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-09 04:48:13.486589 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-09 04:48:13.486595 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-09 04:48:13.486601 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-09 04:48:13.486607 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-09 04:48:13.486613 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-09 04:48:13.486620 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-09 04:48:13.486626 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 
1.15GB 2026-02-09 04:48:13.486632 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-09 04:48:13.486691 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-09 04:48:13.486699 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-09 04:48:13.486705 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-09 04:48:13.486711 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-09 04:48:13.486717 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-09 04:48:13.486723 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-09 04:48:13.486729 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-09 04:48:13.486735 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-09 04:48:13.486741 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-09 04:48:13.486748 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-09 04:48:13.486754 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-09 04:48:13.486760 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-09 04:48:13.486766 | orchestrator | 
registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-09 04:48:13.486772 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-09 04:48:13.486779 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-09 04:48:13.486800 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-09 04:48:13.486814 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-09 04:48:13.486820 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-09 04:48:13.486826 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-09 04:48:13.486832 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-09 04:48:13.486838 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-09 04:48:13.486859 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-09 04:48:13.486866 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-09 04:48:13.486872 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-09 04:48:13.486878 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-09 04:48:13.486884 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-09 04:48:13.486890 | orchestrator | 
registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-09 04:48:13.486897 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-09 04:48:13.486903 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-09 04:48:13.486909 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-09 04:48:13.486915 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-09 04:48:13.486921 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-09 04:48:13.486928 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-09 04:48:13.486934 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-09 04:48:13.486940 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-09 04:48:13.486946 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-09 04:48:13.486952 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-09 04:48:13.486958 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-09 04:48:13.486964 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-09 04:48:13.486971 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-09 04:48:13.486977 | orchestrator | 
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-09 04:48:13.486985 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-09 04:48:13.486992 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-09 04:48:13.487004 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-09 04:48:13.487012 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-09 04:48:13.487019 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-09 04:48:13.487027 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-09 04:48:13.487040 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-09 04:48:13.487047 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-09 04:48:13.487054 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-09 04:48:13.487062 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-09 04:48:13.487069 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-09 04:48:13.487077 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-09 04:48:13.889072 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-09 04:48:13.889376 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-09 04:48:13.944753 | 
orchestrator | 2026-02-09 04:48:13.944852 | orchestrator | ## Containers @ testbed-node-2 2026-02-09 04:48:13.944868 | orchestrator | 2026-02-09 04:48:13.944880 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-09 04:48:13.944891 | orchestrator | + echo 2026-02-09 04:48:13.944903 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-09 04:48:13.944915 | orchestrator | + echo 2026-02-09 04:48:13.944926 | orchestrator | + osism container testbed-node-2 ps 2026-02-09 04:48:16.503453 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-09 04:48:16.503564 | orchestrator | cc8ef5479b74 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-09 04:48:16.503582 | orchestrator | 388e2c8457f1 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-09 04:48:16.503594 | orchestrator | 1d70653c48ab registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-09 04:48:16.503605 | orchestrator | 73677a4d395e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-09 04:48:16.503618 | orchestrator | 29603c8d4bde registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-09 04:48:16.503629 | orchestrator | e3b1bd6b3a8d registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-09 04:48:16.503686 | orchestrator | d2159047c81b registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-09 
04:48:16.503703 | orchestrator | ae28ee3d2abd registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-09 04:48:16.503756 | orchestrator | 9d0236ffa883 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-09 04:48:16.503779 | orchestrator | 9a4a48b9907e registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-09 04:48:16.503799 | orchestrator | bb52d4d1fc80 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-09 04:48:16.503820 | orchestrator | f5321bc8fee0 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-09 04:48:16.503888 | orchestrator | d73bbdafdfd0 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-09 04:48:16.503902 | orchestrator | ec72e2ab449d registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-09 04:48:16.503913 | orchestrator | 22e1941ed057 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-09 04:48:16.503924 | orchestrator | cc33a8cbfbda registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-09 04:48:16.503935 | orchestrator | 25dc1143649d registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-09 04:48:16.503946 | orchestrator | 
f0e215e0a293 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-09 04:48:16.503957 | orchestrator | a38149eacc27 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-09 04:48:16.503988 | orchestrator | 2eb62154c815 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-09 04:48:16.504000 | orchestrator | dc22633a4ef3 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-09 04:48:16.504011 | orchestrator | c32978a019fa registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-09 04:48:16.504022 | orchestrator | 10b6f1b81763 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-09 04:48:16.504033 | orchestrator | 4544aa3a21f7 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-09 04:48:16.504044 | orchestrator | 197e0dcd17f0 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-09 04:48:16.504064 | orchestrator | 49fe13feb289 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-09 04:48:16.504075 | orchestrator | 2d66605b3c4b registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) 
designate_central 2026-02-09 04:48:16.504093 | orchestrator | dd8a44a228bd registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-09 04:48:16.504112 | orchestrator | 47a59aa7e6e4 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-02-09 04:48:16.504131 | orchestrator | ff9031f658d2 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-09 04:48:16.504148 | orchestrator | 3217ace7ceeb registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-09 04:48:16.504167 | orchestrator | a96c58ffd393 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-09 04:48:16.504185 | orchestrator | 911fd162bb83 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-02-09 04:48:16.504204 | orchestrator | 1cc170e8c0ce registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-09 04:48:16.504223 | orchestrator | e6462fbe1232 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-09 04:48:16.504241 | orchestrator | 434b5afcfed9 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-09 04:48:16.504259 | orchestrator | 9e7106186f72 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 
33 minutes (healthy) glance_api 2026-02-09 04:48:16.504270 | orchestrator | 71228c20fff4 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-09 04:48:16.504281 | orchestrator | 83f5e1df38fa registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-09 04:48:16.504301 | orchestrator | cabde9a73016 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-09 04:48:16.504312 | orchestrator | 70502e2caca6 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-09 04:48:16.504323 | orchestrator | 85a86aabf9b5 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-09 04:48:16.504334 | orchestrator | c155907677db registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-09 04:48:16.504355 | orchestrator | c427cf00a1d3 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-09 04:48:16.504366 | orchestrator | e56cccc24a1f registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-02-09 04:48:16.504376 | orchestrator | c3bcba1373cf registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-02-09 04:48:16.504387 | orchestrator | f972e9996732 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone 
2026-02-09 04:48:16.504398 | orchestrator | 965a1422ed25 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 53 minutes (healthy) keystone_fernet 2026-02-09 04:48:16.504409 | orchestrator | f503885b865d registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-09 04:48:16.504420 | orchestrator | 229a1e7f284a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-2 2026-02-09 04:48:16.504431 | orchestrator | e07b265e980f registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-09 04:48:16.504448 | orchestrator | 08d9b4f0b230 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-09 04:48:16.504460 | orchestrator | 293d4e0c8454 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-09 04:48:16.504478 | orchestrator | 41c75fe8070a registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-09 04:48:16.504497 | orchestrator | c1852255da1f registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-09 04:48:16.504515 | orchestrator | 1174029476d1 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-09 04:48:16.504534 | orchestrator | 707da48da9d5 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-09 04:48:16.504552 | orchestrator | dcb49f231d63 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-09 04:48:16.504571 | orchestrator | c78969312f91 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-09 04:48:16.504597 | orchestrator | fcd86f564c89 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-09 04:48:16.504609 | orchestrator | b6a596dcdb12 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-09 04:48:16.504628 | orchestrator | 0c97f5d12e80 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-09 04:48:16.504640 | orchestrator | 8917d223e929 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-09 04:48:16.504742 | orchestrator | 4b4149ed9317 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-09 04:48:16.504754 | orchestrator | be6084077f22 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-09 04:48:16.504765 | orchestrator | c41e62ea4a3c registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-09 04:48:16.504776 | orchestrator | 9c8e0c700faf registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-09 04:48:16.504787 | orchestrator | 51670d0215f9 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-09 04:48:16.504798 | orchestrator | 24df45a531d6 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-09 04:48:16.504809 | orchestrator | 7cf0af18c6b8 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-09 04:48:16.504820 | orchestrator | 4600362838da registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-09 04:48:16.855908 | orchestrator | 2026-02-09 04:48:16.856001 | orchestrator | ## Images @ testbed-node-2 2026-02-09 04:48:16.856017 | orchestrator | 2026-02-09 04:48:16.856028 | orchestrator | + echo 2026-02-09 04:48:16.856040 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-09 04:48:16.856051 | orchestrator | + echo 2026-02-09 04:48:16.856063 | orchestrator | + osism container testbed-node-2 images 2026-02-09 04:48:19.324075 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-09 04:48:19.324175 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-09 04:48:19.324188 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-09 04:48:19.324199 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-09 04:48:19.324251 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-09 04:48:19.324263 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-09 04:48:19.324272 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-09 
04:48:19.324280 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-09 04:48:19.324289 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-09 04:48:19.324321 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-09 04:48:19.324330 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-09 04:48:19.324343 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-09 04:48:19.324352 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-09 04:48:19.324361 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-09 04:48:19.324369 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-09 04:48:19.324378 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-09 04:48:19.324386 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-09 04:48:19.324395 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-09 04:48:19.324403 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-09 04:48:19.324411 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-09 04:48:19.324420 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-09 04:48:19.324428 | 
orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-09 04:48:19.324436 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-09 04:48:19.324445 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-09 04:48:19.324453 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-09 04:48:19.324462 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-09 04:48:19.324470 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-09 04:48:19.324479 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-09 04:48:19.324487 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-09 04:48:19.324495 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-09 04:48:19.324504 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-09 04:48:19.324512 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-09 04:48:19.324536 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-09 04:48:19.324546 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-09 04:48:19.324556 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-09 04:48:19.324565 | 
orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-09 04:48:19.324582 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-09 04:48:19.324592 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-09 04:48:19.324602 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-09 04:48:19.324619 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-09 04:48:19.324630 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-09 04:48:19.324639 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-09 04:48:19.324676 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-09 04:48:19.324686 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-09 04:48:19.324696 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-09 04:48:19.324706 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-09 04:48:19.324716 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-09 04:48:19.324725 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-09 04:48:19.324735 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-09 04:48:19.324745 | 
orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-09 04:48:19.324755 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-09 04:48:19.324765 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-09 04:48:19.324775 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-09 04:48:19.324785 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-09 04:48:19.324794 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-09 04:48:19.324804 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-09 04:48:19.324813 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-09 04:48:19.324823 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-09 04:48:19.324833 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-09 04:48:19.324843 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-09 04:48:19.324852 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-09 04:48:19.324862 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-09 04:48:19.324877 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-09 04:48:19.324887 
| orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-09 04:48:19.324928 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-09 04:48:19.324940 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-09 04:48:19.324950 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-09 04:48:19.324970 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-09 04:48:19.324985 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-09 04:48:19.324994 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-09 04:48:19.667464 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2026-02-09 04:48:19.675553 | orchestrator | + set -e 2026-02-09 04:48:19.675590 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 04:48:19.675604 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 04:48:19.675616 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 04:48:19.675627 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-09 04:48:19.675638 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 04:48:19.675672 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 04:48:19.675686 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 04:48:19.675697 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 04:48:19.675708 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 04:48:19.675719 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 04:48:19.675730 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 04:48:19.675741 | orchestrator | ++ export ARA=false 2026-02-09 04:48:19.675752 | orchestrator | ++ 
ARA=false 2026-02-09 04:48:19.675763 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 04:48:19.675774 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 04:48:19.675784 | orchestrator | ++ export TEMPEST=false 2026-02-09 04:48:19.675795 | orchestrator | ++ TEMPEST=false 2026-02-09 04:48:19.675806 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 04:48:19.675817 | orchestrator | ++ IS_ZUUL=true 2026-02-09 04:48:19.675828 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 04:48:19.675839 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 04:48:19.675849 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 04:48:19.675860 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 04:48:19.675871 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 04:48:19.675882 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 04:48:19.675893 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 04:48:19.675904 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 04:48:19.675915 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 04:48:19.675926 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 04:48:19.675937 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-09 04:48:19.675948 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-02-09 04:48:19.684989 | orchestrator | + set -e 2026-02-09 04:48:19.685072 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-09 04:48:19.685088 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 04:48:19.685102 | orchestrator | ++ INTERACTIVE=false 2026-02-09 04:48:19.685120 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 04:48:19.685143 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-09 04:48:19.685402 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-09 04:48:19.685895 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' 
/opt/configuration/environments/manager/configuration.yml 2026-02-09 04:48:19.693537 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 04:48:19.693618 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 04:48:19.693932 | orchestrator | 2026-02-09 04:48:19.694085 | orchestrator | + echo 2026-02-09 04:48:19.698519 | orchestrator | # Ceph status 2026-02-09 04:48:19.698583 | orchestrator | 2026-02-09 04:48:19.698595 | orchestrator | + echo '# Ceph status' 2026-02-09 04:48:19.698629 | orchestrator | + echo 2026-02-09 04:48:19.698638 | orchestrator | + ceph -s 2026-02-09 04:48:20.363425 | orchestrator | cluster: 2026-02-09 04:48:20.363554 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-02-09 04:48:20.363572 | orchestrator | health: HEALTH_OK 2026-02-09 04:48:20.363585 | orchestrator | 2026-02-09 04:48:20.363597 | orchestrator | services: 2026-02-09 04:48:20.363608 | orchestrator | mon: 3 daemons, quorum testbed-node-2,testbed-node-0,testbed-node-1 (age 68m) 2026-02-09 04:48:20.363633 | orchestrator | mgr: testbed-node-2(active, since 55m), standbys: testbed-node-1, testbed-node-0 2026-02-09 04:48:20.363645 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-02-09 04:48:20.363704 | orchestrator | osd: 6 osds: 6 up (since 64m), 6 in (since 65m) 2026-02-09 04:48:20.363716 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-02-09 04:48:20.363727 | orchestrator | 2026-02-09 04:48:20.363738 | orchestrator | data: 2026-02-09 04:48:20.363749 | orchestrator | volumes: 1/1 healthy 2026-02-09 04:48:20.363760 | orchestrator | pools: 14 pools, 401 pgs 2026-02-09 04:48:20.363771 | orchestrator | objects: 556 objects, 2.2 GiB 2026-02-09 04:48:20.363782 | orchestrator | usage: 7.0 GiB used, 113 GiB / 120 GiB avail 2026-02-09 04:48:20.363793 | orchestrator | pgs: 401 active+clean 2026-02-09 04:48:20.363804 | orchestrator | 2026-02-09 04:48:20.410357 | orchestrator | 2026-02-09 04:48:20.410437 | orchestrator | # Ceph versions 2026-02-09 
04:48:20.410449 | orchestrator | 2026-02-09 04:48:20.410459 | orchestrator | + echo 2026-02-09 04:48:20.410469 | orchestrator | + echo '# Ceph versions' 2026-02-09 04:48:20.410574 | orchestrator | + echo 2026-02-09 04:48:20.410586 | orchestrator | + ceph versions 2026-02-09 04:48:21.029965 | orchestrator | { 2026-02-09 04:48:21.030115 | orchestrator | "mon": { 2026-02-09 04:48:21.030134 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-09 04:48:21.030147 | orchestrator | }, 2026-02-09 04:48:21.030159 | orchestrator | "mgr": { 2026-02-09 04:48:21.030170 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-09 04:48:21.030181 | orchestrator | }, 2026-02-09 04:48:21.030192 | orchestrator | "osd": { 2026-02-09 04:48:21.030203 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-02-09 04:48:21.030214 | orchestrator | }, 2026-02-09 04:48:21.030225 | orchestrator | "mds": { 2026-02-09 04:48:21.030236 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-09 04:48:21.030246 | orchestrator | }, 2026-02-09 04:48:21.030257 | orchestrator | "rgw": { 2026-02-09 04:48:21.030268 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-09 04:48:21.030279 | orchestrator | }, 2026-02-09 04:48:21.030290 | orchestrator | "overall": { 2026-02-09 04:48:21.030302 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-02-09 04:48:21.030313 | orchestrator | } 2026-02-09 04:48:21.030324 | orchestrator | } 2026-02-09 04:48:21.087986 | orchestrator | 2026-02-09 04:48:21.088097 | orchestrator | # Ceph OSD tree 2026-02-09 04:48:21.088110 | orchestrator | 2026-02-09 04:48:21.088121 | orchestrator | + echo 2026-02-09 04:48:21.088131 | orchestrator | + echo '# Ceph OSD tree' 2026-02-09 
04:48:21.088142 | orchestrator | + echo 2026-02-09 04:48:21.088187 | orchestrator | + ceph osd df tree 2026-02-09 04:48:21.620061 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-02-09 04:48:21.620203 | orchestrator | -1 0.11691 - 120 GiB 7.0 GiB 6.7 GiB 6 KiB 373 MiB 113 GiB 5.87 1.00 - root default 2026-02-09 04:48:21.620230 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3 2026-02-09 04:48:21.620249 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.0 GiB 1011 MiB 1 KiB 62 MiB 19 GiB 5.24 0.89 212 up osd.1 2026-02-09 04:48:21.620268 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.49 1.11 178 up osd.3 2026-02-09 04:48:21.620286 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-4 2026-02-09 04:48:21.620303 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 62 MiB 18 GiB 7.55 1.29 191 up osd.0 2026-02-09 04:48:21.620356 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 856 MiB 795 MiB 1 KiB 62 MiB 19 GiB 4.19 0.71 197 up osd.5 2026-02-09 04:48:21.620375 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-5 2026-02-09 04:48:21.620394 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 66 MiB 19 GiB 7.22 1.23 203 up osd.2 2026-02-09 04:48:21.620414 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 928 MiB 867 MiB 1 KiB 62 MiB 19 GiB 4.54 0.77 189 up osd.4 2026-02-09 04:48:21.620430 | orchestrator | TOTAL 120 GiB 7.0 GiB 6.7 GiB 9.3 KiB 373 MiB 113 GiB 5.87 2026-02-09 04:48:21.620448 | orchestrator | MIN/MAX VAR: 0.71/1.29 STDDEV: 1.29 2026-02-09 04:48:21.670727 | orchestrator | 2026-02-09 04:48:21.670823 | orchestrator | # Ceph monitor status 2026-02-09 04:48:21.670848 | orchestrator | 2026-02-09 04:48:21.670869 | orchestrator | + echo 2026-02-09 04:48:21.670890 | orchestrator | + echo '# 
Ceph monitor status' 2026-02-09 04:48:21.670911 | orchestrator | + echo 2026-02-09 04:48:21.670933 | orchestrator | + ceph mon stat 2026-02-09 04:48:22.294206 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-2, quorum 0,1,2 testbed-node-2,testbed-node-0,testbed-node-1 2026-02-09 04:48:22.337809 | orchestrator | 2026-02-09 04:48:22.337896 | orchestrator | # Ceph quorum status 2026-02-09 04:48:22.337908 | orchestrator | 2026-02-09 04:48:22.337915 | orchestrator | + echo 2026-02-09 04:48:22.337923 | orchestrator | + echo '# Ceph quorum status' 2026-02-09 04:48:22.337931 | orchestrator | + echo 2026-02-09 04:48:22.337947 | orchestrator | + ceph quorum_status 2026-02-09 04:48:22.338127 | orchestrator | + jq 2026-02-09 04:48:22.986513 | orchestrator | { 2026-02-09 04:48:22.986803 | orchestrator | "election_epoch": 4, 2026-02-09 04:48:22.986823 | orchestrator | "quorum": [ 2026-02-09 04:48:22.986831 | orchestrator | 0, 2026-02-09 04:48:22.986838 | orchestrator | 1, 2026-02-09 04:48:22.986845 | orchestrator | 2 2026-02-09 04:48:22.986852 | orchestrator | ], 2026-02-09 04:48:22.986859 | orchestrator | "quorum_names": [ 2026-02-09 04:48:22.986866 | orchestrator | "testbed-node-2", 2026-02-09 04:48:22.986873 | orchestrator | "testbed-node-0", 2026-02-09 04:48:22.986879 | orchestrator | "testbed-node-1" 2026-02-09 04:48:22.986886 | orchestrator | ], 2026-02-09 04:48:22.986893 | orchestrator | "quorum_leader_name": "testbed-node-2", 2026-02-09 04:48:22.986900 | orchestrator | "quorum_age": 4086, 2026-02-09 04:48:22.986907 | orchestrator | "features": { 2026-02-09 04:48:22.986914 | orchestrator | "quorum_con": "4540138322906710015", 2026-02-09 04:48:22.986920 | orchestrator | "quorum_mon": [ 2026-02-09 04:48:22.986927 | 
orchestrator | "kraken", 2026-02-09 04:48:22.986933 | orchestrator | "luminous", 2026-02-09 04:48:22.986940 | orchestrator | "mimic", 2026-02-09 04:48:22.986947 | orchestrator | "osdmap-prune", 2026-02-09 04:48:22.986953 | orchestrator | "nautilus", 2026-02-09 04:48:22.986960 | orchestrator | "octopus", 2026-02-09 04:48:22.986966 | orchestrator | "pacific", 2026-02-09 04:48:22.986973 | orchestrator | "elector-pinging", 2026-02-09 04:48:22.986979 | orchestrator | "quincy", 2026-02-09 04:48:22.986986 | orchestrator | "reef" 2026-02-09 04:48:22.986993 | orchestrator | ] 2026-02-09 04:48:22.986999 | orchestrator | }, 2026-02-09 04:48:22.987006 | orchestrator | "monmap": { 2026-02-09 04:48:22.987012 | orchestrator | "epoch": 1, 2026-02-09 04:48:22.987019 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-09 04:48:22.987027 | orchestrator | "modified": "2026-02-09T03:40:05.024258Z", 2026-02-09 04:48:22.987034 | orchestrator | "created": "2026-02-09T03:40:05.024258Z", 2026-02-09 04:48:22.987040 | orchestrator | "min_mon_release": 18, 2026-02-09 04:48:22.987047 | orchestrator | "min_mon_release_name": "reef", 2026-02-09 04:48:22.987054 | orchestrator | "election_strategy": 1, 2026-02-09 04:48:22.987060 | orchestrator | "disallowed_leaders: ": "", 2026-02-09 04:48:22.987067 | orchestrator | "stretch_mode": false, 2026-02-09 04:48:22.987073 | orchestrator | "tiebreaker_mon": "", 2026-02-09 04:48:22.987080 | orchestrator | "removed_ranks: ": "", 2026-02-09 04:48:22.987086 | orchestrator | "features": { 2026-02-09 04:48:22.987093 | orchestrator | "persistent": [ 2026-02-09 04:48:22.987099 | orchestrator | "kraken", 2026-02-09 04:48:22.988436 | orchestrator | "luminous", 2026-02-09 04:48:22.988456 | orchestrator | "mimic", 2026-02-09 04:48:22.988463 | orchestrator | "osdmap-prune", 2026-02-09 04:48:22.988470 | orchestrator | "nautilus", 2026-02-09 04:48:22.988476 | orchestrator | "octopus", 2026-02-09 04:48:22.988483 | orchestrator | "pacific", 2026-02-09 
04:48:22.988490 | orchestrator | "elector-pinging", 2026-02-09 04:48:22.988496 | orchestrator | "quincy", 2026-02-09 04:48:22.988503 | orchestrator | "reef" 2026-02-09 04:48:22.988510 | orchestrator | ], 2026-02-09 04:48:22.988517 | orchestrator | "optional": [] 2026-02-09 04:48:22.988523 | orchestrator | }, 2026-02-09 04:48:22.988530 | orchestrator | "mons": [ 2026-02-09 04:48:22.988551 | orchestrator | { 2026-02-09 04:48:22.988558 | orchestrator | "rank": 0, 2026-02-09 04:48:22.988565 | orchestrator | "name": "testbed-node-2", 2026-02-09 04:48:22.988572 | orchestrator | "public_addrs": { 2026-02-09 04:48:22.988578 | orchestrator | "addrvec": [ 2026-02-09 04:48:22.988585 | orchestrator | { 2026-02-09 04:48:22.988592 | orchestrator | "type": "v2", 2026-02-09 04:48:22.988599 | orchestrator | "addr": "192.168.16.8:3300", 2026-02-09 04:48:22.988607 | orchestrator | "nonce": 0 2026-02-09 04:48:22.988613 | orchestrator | }, 2026-02-09 04:48:22.988620 | orchestrator | { 2026-02-09 04:48:22.988627 | orchestrator | "type": "v1", 2026-02-09 04:48:22.988633 | orchestrator | "addr": "192.168.16.8:6789", 2026-02-09 04:48:22.988640 | orchestrator | "nonce": 0 2026-02-09 04:48:22.988646 | orchestrator | } 2026-02-09 04:48:22.988704 | orchestrator | ] 2026-02-09 04:48:22.988711 | orchestrator | }, 2026-02-09 04:48:22.988718 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-02-09 04:48:22.988725 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-02-09 04:48:22.988731 | orchestrator | "priority": 0, 2026-02-09 04:48:22.988738 | orchestrator | "weight": 0, 2026-02-09 04:48:22.988745 | orchestrator | "crush_location": "{}" 2026-02-09 04:48:22.988751 | orchestrator | }, 2026-02-09 04:48:22.988758 | orchestrator | { 2026-02-09 04:48:22.988765 | orchestrator | "rank": 1, 2026-02-09 04:48:22.988771 | orchestrator | "name": "testbed-node-0", 2026-02-09 04:48:22.988778 | orchestrator | "public_addrs": { 2026-02-09 04:48:22.988784 | orchestrator | "addrvec": [ 2026-02-09 
04:48:22.988791 | orchestrator | { 2026-02-09 04:48:22.988797 | orchestrator | "type": "v2", 2026-02-09 04:48:22.988804 | orchestrator | "addr": "192.168.16.10:3300", 2026-02-09 04:48:22.988811 | orchestrator | "nonce": 0 2026-02-09 04:48:22.988817 | orchestrator | }, 2026-02-09 04:48:22.988824 | orchestrator | { 2026-02-09 04:48:22.988830 | orchestrator | "type": "v1", 2026-02-09 04:48:22.988837 | orchestrator | "addr": "192.168.16.10:6789", 2026-02-09 04:48:22.988844 | orchestrator | "nonce": 0 2026-02-09 04:48:22.988851 | orchestrator | } 2026-02-09 04:48:22.988857 | orchestrator | ] 2026-02-09 04:48:22.988864 | orchestrator | }, 2026-02-09 04:48:22.988870 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-02-09 04:48:22.988877 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-02-09 04:48:22.988884 | orchestrator | "priority": 0, 2026-02-09 04:48:22.988890 | orchestrator | "weight": 0, 2026-02-09 04:48:22.988897 | orchestrator | "crush_location": "{}" 2026-02-09 04:48:22.988903 | orchestrator | }, 2026-02-09 04:48:22.988910 | orchestrator | { 2026-02-09 04:48:22.988917 | orchestrator | "rank": 2, 2026-02-09 04:48:22.988923 | orchestrator | "name": "testbed-node-1", 2026-02-09 04:48:22.988930 | orchestrator | "public_addrs": { 2026-02-09 04:48:22.988937 | orchestrator | "addrvec": [ 2026-02-09 04:48:22.988943 | orchestrator | { 2026-02-09 04:48:22.988950 | orchestrator | "type": "v2", 2026-02-09 04:48:22.988956 | orchestrator | "addr": "192.168.16.11:3300", 2026-02-09 04:48:22.988963 | orchestrator | "nonce": 0 2026-02-09 04:48:22.988970 | orchestrator | }, 2026-02-09 04:48:22.988976 | orchestrator | { 2026-02-09 04:48:22.988983 | orchestrator | "type": "v1", 2026-02-09 04:48:22.988989 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-09 04:48:22.988996 | orchestrator | "nonce": 0 2026-02-09 04:48:22.989003 | orchestrator | } 2026-02-09 04:48:22.989010 | orchestrator | ] 2026-02-09 04:48:22.989016 | orchestrator | }, 2026-02-09 04:48:22.989023 
| orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-09 04:48:22.989030 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-09 04:48:22.989036 | orchestrator | "priority": 0, 2026-02-09 04:48:22.989051 | orchestrator | "weight": 0, 2026-02-09 04:48:22.989057 | orchestrator | "crush_location": "{}" 2026-02-09 04:48:22.989064 | orchestrator | } 2026-02-09 04:48:22.989071 | orchestrator | ] 2026-02-09 04:48:22.989077 | orchestrator | } 2026-02-09 04:48:22.989084 | orchestrator | } 2026-02-09 04:48:22.989104 | orchestrator | 2026-02-09 04:48:22.989111 | orchestrator | + echo 2026-02-09 04:48:22.989118 | orchestrator | + echo '# Ceph free space status' 2026-02-09 04:48:22.989124 | orchestrator | # Ceph free space status 2026-02-09 04:48:22.989131 | orchestrator | 2026-02-09 04:48:22.989138 | orchestrator | + echo 2026-02-09 04:48:22.989144 | orchestrator | + ceph df 2026-02-09 04:48:23.656958 | orchestrator | --- RAW STORAGE --- 2026-02-09 04:48:23.657052 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-09 04:48:23.657078 | orchestrator | hdd 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-02-09 04:48:23.657100 | orchestrator | TOTAL 120 GiB 113 GiB 7.0 GiB 7.0 GiB 5.87 2026-02-09 04:48:23.657111 | orchestrator | 2026-02-09 04:48:23.657123 | orchestrator | --- POOLS --- 2026-02-09 04:48:23.657134 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-09 04:48:23.657147 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-02-09 04:48:23.657158 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-09 04:48:23.657168 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-09 04:48:23.657179 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-09 04:48:23.657189 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-09 04:48:23.657202 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-09 04:48:23.657212 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-02-09 04:48:23.657223 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-09 04:48:23.657233 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-02-09 04:48:23.657244 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-09 04:48:23.657254 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-09 04:48:23.657265 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB 2026-02-09 04:48:23.657293 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-09 04:48:23.657304 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-09 04:48:23.713968 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-09 04:48:23.769947 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-09 04:48:23.770138 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-09 04:48:23.770170 | orchestrator | + osism apply facts 2026-02-09 04:48:35.944893 | orchestrator | 2026-02-09 04:48:35 | INFO  | Task 961b851e-9fef-4d10-bad3-531ce0c36c9c (facts) was prepared for execution. 2026-02-09 04:48:35.945026 | orchestrator | 2026-02-09 04:48:35 | INFO  | It takes a moment until task 961b851e-9fef-4d10-bad3-531ce0c36c9c (facts) has been started and output is visible here. 
2026-02-09 04:48:50.943013 | orchestrator | 2026-02-09 04:48:50.943161 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-09 04:48:50.943177 | orchestrator | 2026-02-09 04:48:50.943187 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-09 04:48:50.943198 | orchestrator | Monday 09 February 2026 04:48:41 +0000 (0:00:00.297) 0:00:00.297 ******* 2026-02-09 04:48:50.943208 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:48:50.943219 | orchestrator | ok: [testbed-manager] 2026-02-09 04:48:50.943229 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:48:50.943238 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:48:50.943248 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:48:50.943257 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:48:50.943267 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:48:50.943278 | orchestrator | 2026-02-09 04:48:50.943296 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-09 04:48:50.943350 | orchestrator | Monday 09 February 2026 04:48:42 +0000 (0:00:01.249) 0:00:01.547 ******* 2026-02-09 04:48:50.943369 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:48:50.943381 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:48:50.943390 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:48:50.943400 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:48:50.943409 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:48:50.943419 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:48:50.943429 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:48:50.943438 | orchestrator | 2026-02-09 04:48:50.943448 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-09 04:48:50.943458 | orchestrator | 2026-02-09 04:48:50.943467 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-09 04:48:50.943477 | orchestrator | Monday 09 February 2026 04:48:44 +0000 (0:00:01.616) 0:00:03.163 ******* 2026-02-09 04:48:50.943486 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:48:50.943496 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:48:50.943505 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:48:50.943515 | orchestrator | ok: [testbed-manager] 2026-02-09 04:48:50.943525 | orchestrator | ok: [testbed-node-3] 2026-02-09 04:48:50.943536 | orchestrator | ok: [testbed-node-4] 2026-02-09 04:48:50.943547 | orchestrator | ok: [testbed-node-5] 2026-02-09 04:48:50.943558 | orchestrator | 2026-02-09 04:48:50.943569 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-09 04:48:50.943580 | orchestrator | 2026-02-09 04:48:50.943591 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-09 04:48:50.943602 | orchestrator | Monday 09 February 2026 04:48:49 +0000 (0:00:05.705) 0:00:08.868 ******* 2026-02-09 04:48:50.943614 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:48:50.943625 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:48:50.943636 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:48:50.943647 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:48:50.943658 | orchestrator | skipping: [testbed-node-3] 2026-02-09 04:48:50.943670 | orchestrator | skipping: [testbed-node-4] 2026-02-09 04:48:50.943681 | orchestrator | skipping: [testbed-node-5] 2026-02-09 04:48:50.943719 | orchestrator | 2026-02-09 04:48:50.943732 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:48:50.943744 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 04:48:50.943755 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-09 04:48:50.943767 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 04:48:50.943797 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 04:48:50.943809 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 04:48:50.943821 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 04:48:50.943832 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 04:48:50.943842 | orchestrator | 2026-02-09 04:48:50.943852 | orchestrator | 2026-02-09 04:48:50.943861 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:48:50.943872 | orchestrator | Monday 09 February 2026 04:48:50 +0000 (0:00:00.637) 0:00:09.506 ******* 2026-02-09 04:48:50.943882 | orchestrator | =============================================================================== 2026-02-09 04:48:50.943893 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.71s 2026-02-09 04:48:50.943913 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.62s 2026-02-09 04:48:50.943924 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.25s 2026-02-09 04:48:50.943935 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.64s 2026-02-09 04:48:51.323550 | orchestrator | + osism validate ceph-mons 2026-02-09 04:49:15.776334 | orchestrator | 2026-02-09 04:49:15.776522 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-09 04:49:15.776546 | orchestrator | 2026-02-09 04:49:15.776559 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-09 04:49:15.776571 | orchestrator | Monday 09 February 2026 04:48:58 +0000 (0:00:00.505) 0:00:00.505 ******* 2026-02-09 04:49:15.776583 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-09 04:49:15.776594 | orchestrator | 2026-02-09 04:49:15.776605 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-09 04:49:15.776616 | orchestrator | Monday 09 February 2026 04:49:00 +0000 (0:00:01.043) 0:00:01.549 ******* 2026-02-09 04:49:15.776627 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-09 04:49:15.776637 | orchestrator | 2026-02-09 04:49:15.776648 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-09 04:49:15.776659 | orchestrator | Monday 09 February 2026 04:49:01 +0000 (0:00:00.154) 0:00:02.689 ******* 2026-02-09 04:49:15.776670 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:49:15.776682 | orchestrator | 2026-02-09 04:49:15.776693 | orchestrator | TASK [Prepare test data for container existence test] ************************** 2026-02-09 04:49:15.776704 | orchestrator | Monday 09 February 2026 04:49:01 +0000 (0:00:00.154) 0:00:02.844 ******* 2026-02-09 04:49:15.776715 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:49:15.776973 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:49:15.777105 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:49:15.777122 | orchestrator | 2026-02-09 04:49:15.777137 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-09 04:49:15.777150 | orchestrator | Monday 09 February 2026 04:49:01 +0000 (0:00:00.341) 0:00:03.185 ******* 2026-02-09 04:49:15.777161 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:49:15.777172 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:49:15.777183 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:49:15.777194 | 
orchestrator | 2026-02-09 04:49:15.777205 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-09 04:49:15.777216 | orchestrator | Monday 09 February 2026 04:49:02 +0000 (0:00:00.995) 0:00:04.181 ******* 2026-02-09 04:49:15.777227 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:49:15.777240 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:49:15.777250 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:49:15.777261 | orchestrator | 2026-02-09 04:49:15.777272 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-09 04:49:15.777283 | orchestrator | Monday 09 February 2026 04:49:02 +0000 (0:00:00.300) 0:00:04.482 ******* 2026-02-09 04:49:15.777294 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:49:15.777305 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:49:15.777315 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:49:15.777326 | orchestrator | 2026-02-09 04:49:15.777337 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-09 04:49:15.777347 | orchestrator | Monday 09 February 2026 04:49:03 +0000 (0:00:00.557) 0:00:05.039 ******* 2026-02-09 04:49:15.777358 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:49:15.777369 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:49:15.777379 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:49:15.777390 | orchestrator | 2026-02-09 04:49:15.777400 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-09 04:49:15.777411 | orchestrator | Monday 09 February 2026 04:49:03 +0000 (0:00:00.342) 0:00:05.382 ******* 2026-02-09 04:49:15.777422 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:49:15.777477 | orchestrator | skipping: [testbed-node-1] 2026-02-09 04:49:15.777489 | orchestrator | skipping: [testbed-node-2] 2026-02-09 04:49:15.777499 | orchestrator | 2026-02-09 
04:49:15.777510 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-09 04:49:15.777521 | orchestrator | Monday 09 February 2026 04:49:04 +0000 (0:00:00.337) 0:00:05.719 ******* 2026-02-09 04:49:15.777531 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:49:15.777541 | orchestrator | ok: [testbed-node-1] 2026-02-09 04:49:15.777552 | orchestrator | ok: [testbed-node-2] 2026-02-09 04:49:15.777563 | orchestrator | 2026-02-09 04:49:15.777574 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-09 04:49:15.777584 | orchestrator | Monday 09 February 2026 04:49:04 +0000 (0:00:00.526) 0:00:06.246 ******* 2026-02-09 04:49:15.777595 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:49:15.777606 | orchestrator | 2026-02-09 04:49:15.777616 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-09 04:49:15.777627 | orchestrator | Monday 09 February 2026 04:49:04 +0000 (0:00:00.261) 0:00:06.507 ******* 2026-02-09 04:49:15.777638 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:49:15.777648 | orchestrator | 2026-02-09 04:49:15.777659 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-09 04:49:15.777670 | orchestrator | Monday 09 February 2026 04:49:05 +0000 (0:00:00.316) 0:00:06.823 ******* 2026-02-09 04:49:15.777680 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:49:15.777691 | orchestrator | 2026-02-09 04:49:15.777701 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-09 04:49:15.777712 | orchestrator | Monday 09 February 2026 04:49:05 +0000 (0:00:00.288) 0:00:07.112 ******* 2026-02-09 04:49:15.777722 | orchestrator | 2026-02-09 04:49:15.777762 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-09 04:49:15.777773 | orchestrator | 
Monday 09 February 2026 04:49:05 +0000 (0:00:00.091) 0:00:07.203 ******* 2026-02-09 04:49:15.777784 | orchestrator | 2026-02-09 04:49:15.777794 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-09 04:49:15.777805 | orchestrator | Monday 09 February 2026 04:49:05 +0000 (0:00:00.073) 0:00:07.276 ******* 2026-02-09 04:49:15.777815 | orchestrator | 2026-02-09 04:49:15.777826 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-09 04:49:15.777837 | orchestrator | Monday 09 February 2026 04:49:05 +0000 (0:00:00.091) 0:00:07.368 ******* 2026-02-09 04:49:15.777847 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:49:15.777858 | orchestrator | 2026-02-09 04:49:15.777868 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-09 04:49:15.777902 | orchestrator | Monday 09 February 2026 04:49:06 +0000 (0:00:00.315) 0:00:07.684 ******* 2026-02-09 04:49:15.777913 | orchestrator | skipping: [testbed-node-0] 2026-02-09 04:49:15.777924 | orchestrator | 2026-02-09 04:49:15.777969 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-02-09 04:49:15.777982 | orchestrator | Monday 09 February 2026 04:49:06 +0000 (0:00:00.317) 0:00:08.001 ******* 2026-02-09 04:49:15.777993 | orchestrator | ok: [testbed-node-0] 2026-02-09 04:49:15.778004 | orchestrator | 2026-02-09 04:49:15.778086 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-02-09 04:49:15.778101 | orchestrator | Monday 09 February 2026 04:49:06 +0000 (0:00:00.141) 0:00:08.143 ******* 2026-02-09 04:49:15.778125 | orchestrator | changed: [testbed-node-0] 2026-02-09 04:49:15.778141 | orchestrator | 2026-02-09 04:49:15.778153 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-02-09 04:49:15.778163 | orchestrator | Monday 
09 February 2026 04:49:08 +0000 (0:00:01.536) 0:00:09.679 *******
2026-02-09 04:49:15.778174 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:15.778185 | orchestrator |
2026-02-09 04:49:15.778196 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-02-09 04:49:15.778207 | orchestrator | Monday 09 February 2026 04:49:08 +0000 (0:00:00.534) 0:00:10.214 *******
2026-02-09 04:49:15.778228 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:15.778238 | orchestrator |
2026-02-09 04:49:15.778249 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-02-09 04:49:15.778260 | orchestrator | Monday 09 February 2026 04:49:08 +0000 (0:00:00.155) 0:00:10.369 *******
2026-02-09 04:49:15.778270 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:15.778281 | orchestrator |
2026-02-09 04:49:15.778291 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-02-09 04:49:15.778302 | orchestrator | Monday 09 February 2026 04:49:09 +0000 (0:00:00.352) 0:00:10.722 *******
2026-02-09 04:49:15.778312 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:15.778323 | orchestrator |
2026-02-09 04:49:15.778334 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-02-09 04:49:15.778344 | orchestrator | Monday 09 February 2026 04:49:09 +0000 (0:00:00.307) 0:00:11.029 *******
2026-02-09 04:49:15.778355 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:15.778366 | orchestrator |
2026-02-09 04:49:15.778377 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-02-09 04:49:15.778387 | orchestrator | Monday 09 February 2026 04:49:09 +0000 (0:00:00.135) 0:00:11.164 *******
2026-02-09 04:49:15.778398 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:15.778408 | orchestrator |
2026-02-09 04:49:15.778419 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-02-09 04:49:15.778429 | orchestrator | Monday 09 February 2026 04:49:09 +0000 (0:00:00.135) 0:00:11.300 *******
2026-02-09 04:49:15.778440 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:15.778451 | orchestrator |
2026-02-09 04:49:15.778461 | orchestrator | TASK [Gather status data] ******************************************************
2026-02-09 04:49:15.778472 | orchestrator | Monday 09 February 2026 04:49:09 +0000 (0:00:00.118) 0:00:11.418 *******
2026-02-09 04:49:15.778482 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:49:15.778493 | orchestrator |
2026-02-09 04:49:15.778503 | orchestrator | TASK [Set health test data] ****************************************************
2026-02-09 04:49:15.778514 | orchestrator | Monday 09 February 2026 04:49:11 +0000 (0:00:01.229) 0:00:12.648 *******
2026-02-09 04:49:15.778525 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:15.778536 | orchestrator |
2026-02-09 04:49:15.778546 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-02-09 04:49:15.778557 | orchestrator | Monday 09 February 2026 04:49:11 +0000 (0:00:00.328) 0:00:12.976 *******
2026-02-09 04:49:15.778567 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:15.778578 | orchestrator |
2026-02-09 04:49:15.778588 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-02-09 04:49:15.778599 | orchestrator | Monday 09 February 2026 04:49:11 +0000 (0:00:00.159) 0:00:13.136 *******
2026-02-09 04:49:15.778609 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:15.778620 | orchestrator |
2026-02-09 04:49:15.778630 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-02-09 04:49:15.778641 | orchestrator | Monday 09 February 2026 04:49:11 +0000 (0:00:00.163) 0:00:13.300 *******
2026-02-09 04:49:15.778652 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:15.778663 | orchestrator |
2026-02-09 04:49:15.778673 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-02-09 04:49:15.778684 | orchestrator | Monday 09 February 2026 04:49:11 +0000 (0:00:00.158) 0:00:13.458 *******
2026-02-09 04:49:15.778701 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:15.778712 | orchestrator |
2026-02-09 04:49:15.778722 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-09 04:49:15.778774 | orchestrator | Monday 09 February 2026 04:49:12 +0000 (0:00:00.356) 0:00:13.815 *******
2026-02-09 04:49:15.778785 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:15.778796 | orchestrator |
2026-02-09 04:49:15.778807 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-09 04:49:15.778817 | orchestrator | Monday 09 February 2026 04:49:12 +0000 (0:00:00.296) 0:00:14.112 *******
2026-02-09 04:49:15.778835 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:15.778846 | orchestrator |
2026-02-09 04:49:15.778856 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-09 04:49:15.778867 | orchestrator | Monday 09 February 2026 04:49:12 +0000 (0:00:00.287) 0:00:14.400 *******
2026-02-09 04:49:15.778877 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:15.778888 | orchestrator |
2026-02-09 04:49:15.778899 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-09 04:49:15.778910 | orchestrator | Monday 09 February 2026 04:49:14 +0000 (0:00:01.950) 0:00:16.350 *******
2026-02-09 04:49:15.778920 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:15.778931 | orchestrator |
2026-02-09 04:49:15.778942 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-09 04:49:15.778952 | orchestrator | Monday 09 February 2026 04:49:15 +0000 (0:00:00.311) 0:00:16.662 *******
2026-02-09 04:49:15.778963 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:15.778973 | orchestrator |
2026-02-09 04:49:15.778995 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:49:18.806382 | orchestrator | Monday 09 February 2026 04:49:15 +0000 (0:00:00.311) 0:00:16.973 *******
2026-02-09 04:49:18.806529 | orchestrator |
2026-02-09 04:49:18.806547 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:49:18.806559 | orchestrator | Monday 09 February 2026 04:49:15 +0000 (0:00:00.151) 0:00:17.125 *******
2026-02-09 04:49:18.806571 | orchestrator |
2026-02-09 04:49:18.806583 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:49:18.806594 | orchestrator | Monday 09 February 2026 04:49:15 +0000 (0:00:00.077) 0:00:17.203 *******
2026-02-09 04:49:18.806605 | orchestrator |
2026-02-09 04:49:18.806615 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-09 04:49:18.806626 | orchestrator | Monday 09 February 2026 04:49:15 +0000 (0:00:00.080) 0:00:17.283 *******
2026-02-09 04:49:18.806638 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:18.806649 | orchestrator |
2026-02-09 04:49:18.806660 | orchestrator | TASK [Print report file information] *******************************************
2026-02-09 04:49:18.806670 | orchestrator | Monday 09 February 2026 04:49:17 +0000 (0:00:01.711) 0:00:18.995 *******
2026-02-09 04:49:18.806681 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-09 04:49:18.806692 | orchestrator |  "msg": [
2026-02-09 04:49:18.806706 | orchestrator |  "Validator run completed.",
2026-02-09 04:49:18.806718 | orchestrator |  "You can find the report file here:",
2026-02-09 04:49:18.806758 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-09T04:48:59+00:00-report.json",
2026-02-09 04:49:18.806771 | orchestrator |  "on the following host:",
2026-02-09 04:49:18.806783 | orchestrator |  "testbed-manager"
2026-02-09 04:49:18.806794 | orchestrator |  ]
2026-02-09 04:49:18.806805 | orchestrator | }
2026-02-09 04:49:18.806816 | orchestrator |
2026-02-09 04:49:18.806827 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 04:49:18.806839 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-09 04:49:18.806853 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 04:49:18.806865 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 04:49:18.806876 | orchestrator |
2026-02-09 04:49:18.806886 | orchestrator |
2026-02-09 04:49:18.806897 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 04:49:18.806908 | orchestrator | Monday 09 February 2026 04:49:18 +0000 (0:00:00.939) 0:00:19.934 *******
2026-02-09 04:49:18.806948 | orchestrator | ===============================================================================
2026-02-09 04:49:18.806960 | orchestrator | Aggregate test results step one ----------------------------------------- 1.95s
2026-02-09 04:49:18.806970 | orchestrator | Write report file ------------------------------------------------------- 1.71s
2026-02-09 04:49:18.806981 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.54s
2026-02-09 04:49:18.806992 | orchestrator | Gather status data ------------------------------------------------------ 1.23s
2026-02-09 04:49:18.807003 | orchestrator | Create report output directory ------------------------------------------ 1.14s
2026-02-09 04:49:18.807013 | orchestrator | Get timestamp for report file ------------------------------------------- 1.04s
2026-02-09 04:49:18.807024 | orchestrator | Get container info ------------------------------------------------------ 1.00s
2026-02-09 04:49:18.807035 | orchestrator | Print report file information ------------------------------------------- 0.94s
2026-02-09 04:49:18.807045 | orchestrator | Set test result to passed if container is existing ---------------------- 0.56s
2026-02-09 04:49:18.807056 | orchestrator | Set quorum test data ---------------------------------------------------- 0.53s
2026-02-09 04:49:18.807084 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.53s
2026-02-09 04:49:18.807095 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.36s
2026-02-09 04:49:18.807106 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.35s
2026-02-09 04:49:18.807117 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s
2026-02-09 04:49:18.807128 | orchestrator | Prepare test data for container existance test -------------------------- 0.34s
2026-02-09 04:49:18.807138 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.34s
2026-02-09 04:49:18.807149 | orchestrator | Set health test data ---------------------------------------------------- 0.33s
2026-02-09 04:49:18.807160 | orchestrator | Fail due to missing containers ------------------------------------------ 0.32s
2026-02-09 04:49:18.807170 | orchestrator | Aggregate test results step two ----------------------------------------- 0.32s
2026-02-09 04:49:18.807181 | orchestrator | Print report file information ------------------------------------------- 0.32s
2026-02-09 04:49:19.197883 | orchestrator | + osism validate ceph-mgrs
2026-02-09 04:49:52.384909 | orchestrator |
2026-02-09 04:49:52.385014 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-02-09 04:49:52.385031 | orchestrator |
2026-02-09 04:49:52.385043 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-09 04:49:52.385077 | orchestrator | Monday 09 February 2026 04:49:36 +0000 (0:00:00.516) 0:00:00.516 *******
2026-02-09 04:49:52.385090 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:52.385101 | orchestrator |
2026-02-09 04:49:52.385112 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-09 04:49:52.385123 | orchestrator | Monday 09 February 2026 04:49:37 +0000 (0:00:00.923) 0:00:01.440 *******
2026-02-09 04:49:52.385134 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:52.385145 | orchestrator |
2026-02-09 04:49:52.385156 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-09 04:49:52.385167 | orchestrator | Monday 09 February 2026 04:49:38 +0000 (0:00:01.068) 0:00:02.509 *******
2026-02-09 04:49:52.385178 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.385190 | orchestrator |
2026-02-09 04:49:52.385201 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-09 04:49:52.385211 | orchestrator | Monday 09 February 2026 04:49:38 +0000 (0:00:00.137) 0:00:02.647 *******
2026-02-09 04:49:52.385222 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.385233 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:49:52.385244 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:49:52.385255 | orchestrator |
2026-02-09 04:49:52.385266 | orchestrator | TASK [Get container info] ******************************************************
2026-02-09 04:49:52.385277 | orchestrator | Monday 09 February 2026 04:49:39 +0000 (0:00:00.319) 0:00:02.967 *******
2026-02-09 04:49:52.385308 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:49:52.385319 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.385330 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:49:52.385348 | orchestrator |
2026-02-09 04:49:52.385367 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-09 04:49:52.385390 | orchestrator | Monday 09 February 2026 04:49:40 +0000 (0:00:01.050) 0:00:04.018 *******
2026-02-09 04:49:52.385415 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:52.385434 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:49:52.385452 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:49:52.385470 | orchestrator |
2026-02-09 04:49:52.385487 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-09 04:49:52.385505 | orchestrator | Monday 09 February 2026 04:49:40 +0000 (0:00:00.307) 0:00:04.325 *******
2026-02-09 04:49:52.385523 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.385541 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:49:52.385560 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:49:52.385578 | orchestrator |
2026-02-09 04:49:52.385597 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-09 04:49:52.385614 | orchestrator | Monday 09 February 2026 04:49:41 +0000 (0:00:00.569) 0:00:04.895 *******
2026-02-09 04:49:52.385633 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.385651 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:49:52.385667 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:49:52.385678 | orchestrator |
2026-02-09 04:49:52.385689 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-02-09 04:49:52.385700 | orchestrator | Monday 09 February 2026 04:49:41 +0000 (0:00:00.331) 0:00:05.227 *******
2026-02-09 04:49:52.385710 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:52.385721 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:49:52.385732 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:49:52.385742 | orchestrator |
2026-02-09 04:49:52.385753 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-02-09 04:49:52.385764 | orchestrator | Monday 09 February 2026 04:49:41 +0000 (0:00:00.293) 0:00:05.520 *******
2026-02-09 04:49:52.385802 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.385813 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:49:52.385824 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:49:52.385835 | orchestrator |
2026-02-09 04:49:52.385846 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-09 04:49:52.385856 | orchestrator | Monday 09 February 2026 04:49:42 +0000 (0:00:00.553) 0:00:06.073 *******
2026-02-09 04:49:52.385867 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:52.385877 | orchestrator |
2026-02-09 04:49:52.385888 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-09 04:49:52.385899 | orchestrator | Monday 09 February 2026 04:49:42 +0000 (0:00:00.271) 0:00:06.345 *******
2026-02-09 04:49:52.385910 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:52.385920 | orchestrator |
2026-02-09 04:49:52.385931 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-09 04:49:52.385942 | orchestrator | Monday 09 February 2026 04:49:42 +0000 (0:00:00.285) 0:00:06.631 *******
2026-02-09 04:49:52.385952 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:52.385963 | orchestrator |
2026-02-09 04:49:52.385974 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:49:52.385985 | orchestrator | Monday 09 February 2026 04:49:43 +0000 (0:00:00.286) 0:00:06.918 *******
2026-02-09 04:49:52.385996 | orchestrator |
2026-02-09 04:49:52.386007 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:49:52.386118 | orchestrator | Monday 09 February 2026 04:49:43 +0000 (0:00:00.091) 0:00:07.009 *******
2026-02-09 04:49:52.386132 | orchestrator |
2026-02-09 04:49:52.386144 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:49:52.386155 | orchestrator | Monday 09 February 2026 04:49:43 +0000 (0:00:00.078) 0:00:07.087 *******
2026-02-09 04:49:52.386181 | orchestrator |
2026-02-09 04:49:52.386192 | orchestrator | TASK [Print report file information] *******************************************
2026-02-09 04:49:52.386203 | orchestrator | Monday 09 February 2026 04:49:43 +0000 (0:00:00.084) 0:00:07.172 *******
2026-02-09 04:49:52.386213 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:52.386224 | orchestrator |
2026-02-09 04:49:52.386235 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-09 04:49:52.386246 | orchestrator | Monday 09 February 2026 04:49:43 +0000 (0:00:00.355) 0:00:07.527 *******
2026-02-09 04:49:52.386257 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:52.386268 | orchestrator |
2026-02-09 04:49:52.386299 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-02-09 04:49:52.386311 | orchestrator | Monday 09 February 2026 04:49:44 +0000 (0:00:00.255) 0:00:07.783 *******
2026-02-09 04:49:52.386322 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.386333 | orchestrator |
2026-02-09 04:49:52.386344 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-02-09 04:49:52.386364 | orchestrator | Monday 09 February 2026 04:49:44 +0000 (0:00:00.134) 0:00:07.917 *******
2026-02-09 04:49:52.386375 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:49:52.386385 | orchestrator |
2026-02-09 04:49:52.386396 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-02-09 04:49:52.386407 | orchestrator | Monday 09 February 2026 04:49:46 +0000 (0:00:01.934) 0:00:09.851 *******
2026-02-09 04:49:52.386417 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.386428 | orchestrator |
2026-02-09 04:49:52.386451 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-02-09 04:49:52.386463 | orchestrator | Monday 09 February 2026 04:49:46 +0000 (0:00:00.584) 0:00:10.436 *******
2026-02-09 04:49:52.386474 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.386484 | orchestrator |
2026-02-09 04:49:52.386495 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-02-09 04:49:52.386505 | orchestrator | Monday 09 February 2026 04:49:47 +0000 (0:00:00.357) 0:00:10.793 *******
2026-02-09 04:49:52.386516 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:52.386527 | orchestrator |
2026-02-09 04:49:52.386538 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-02-09 04:49:52.386548 | orchestrator | Monday 09 February 2026 04:49:47 +0000 (0:00:00.161) 0:00:10.955 *******
2026-02-09 04:49:52.386559 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:49:52.386570 | orchestrator |
2026-02-09 04:49:52.386580 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-09 04:49:52.386591 | orchestrator | Monday 09 February 2026 04:49:47 +0000 (0:00:00.172) 0:00:11.128 *******
2026-02-09 04:49:52.386602 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:52.386613 | orchestrator |
2026-02-09 04:49:52.386624 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-09 04:49:52.386634 | orchestrator | Monday 09 February 2026 04:49:47 +0000 (0:00:00.265) 0:00:11.393 *******
2026-02-09 04:49:52.386645 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:49:52.386656 | orchestrator |
2026-02-09 04:49:52.386666 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-09 04:49:52.386677 | orchestrator | Monday 09 February 2026 04:49:47 +0000 (0:00:00.312) 0:00:11.706 *******
2026-02-09 04:49:52.386688 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:52.386699 | orchestrator |
2026-02-09 04:49:52.386710 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-09 04:49:52.386720 | orchestrator | Monday 09 February 2026 04:49:49 +0000 (0:00:01.361) 0:00:13.068 *******
2026-02-09 04:49:52.386731 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:52.386742 | orchestrator |
2026-02-09 04:49:52.386752 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-09 04:49:52.386763 | orchestrator | Monday 09 February 2026 04:49:49 +0000 (0:00:00.313) 0:00:13.382 *******
2026-02-09 04:49:52.386802 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:52.386814 | orchestrator |
2026-02-09 04:49:52.386825 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:49:52.386835 | orchestrator | Monday 09 February 2026 04:49:49 +0000 (0:00:00.283) 0:00:13.666 *******
2026-02-09 04:49:52.386846 | orchestrator |
2026-02-09 04:49:52.386857 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:49:52.386868 | orchestrator | Monday 09 February 2026 04:49:50 +0000 (0:00:00.089) 0:00:13.755 *******
2026-02-09 04:49:52.386878 | orchestrator |
2026-02-09 04:49:52.386889 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:49:52.386900 | orchestrator | Monday 09 February 2026 04:49:50 +0000 (0:00:00.079) 0:00:13.835 *******
2026-02-09 04:49:52.386911 | orchestrator |
2026-02-09 04:49:52.386922 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-09 04:49:52.386932 | orchestrator | Monday 09 February 2026 04:49:50 +0000 (0:00:00.314) 0:00:14.149 *******
2026-02-09 04:49:52.386943 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-09 04:49:52.386953 | orchestrator |
2026-02-09 04:49:52.386964 | orchestrator | TASK [Print report file information] *******************************************
2026-02-09 04:49:52.386975 | orchestrator | Monday 09 February 2026 04:49:51 +0000 (0:00:01.506) 0:00:15.655 *******
2026-02-09 04:49:52.386986 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-09 04:49:52.386997 | orchestrator |  "msg": [
2026-02-09 04:49:52.387008 | orchestrator |  "Validator run completed.",
2026-02-09 04:49:52.387024 | orchestrator |  "You can find the report file here:",
2026-02-09 04:49:52.387035 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-09T04:49:37+00:00-report.json",
2026-02-09 04:49:52.387048 | orchestrator |  "on the following host:",
2026-02-09 04:49:52.387059 | orchestrator |  "testbed-manager"
2026-02-09 04:49:52.387070 | orchestrator |  ]
2026-02-09 04:49:52.387081 | orchestrator | }
2026-02-09 04:49:52.387092 | orchestrator |
2026-02-09 04:49:52.387102 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 04:49:52.387114 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-09 04:49:52.387127 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 04:49:52.387146 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 04:49:52.802104 | orchestrator |
2026-02-09 04:49:52.802191 | orchestrator |
2026-02-09 04:49:52.802205 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 04:49:52.802220 | orchestrator | Monday 09 February 2026 04:49:52 +0000 (0:00:00.458) 0:00:16.113 *******
2026-02-09 04:49:52.802235 | orchestrator | ===============================================================================
2026-02-09 04:49:52.802248 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.93s
2026-02-09 04:49:52.802259 | orchestrator | Write report file ------------------------------------------------------- 1.51s
2026-02-09 04:49:52.802278 | orchestrator | Aggregate test results step one ----------------------------------------- 1.36s
2026-02-09 04:49:52.802295 | orchestrator | Create report output directory ------------------------------------------ 1.07s
2026-02-09 04:49:52.802309 | orchestrator | Get container info ------------------------------------------------------ 1.05s
2026-02-09 04:49:52.802322 | orchestrator | Get timestamp for report file ------------------------------------------- 0.92s
2026-02-09 04:49:52.802336 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.58s
2026-02-09 04:49:52.802350 | orchestrator | Set test result to passed if container is existing ---------------------- 0.57s
2026-02-09 04:49:52.802396 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.55s
2026-02-09 04:49:52.802411 | orchestrator | Flush handlers ---------------------------------------------------------- 0.48s
2026-02-09 04:49:52.802426 | orchestrator | Print report file information ------------------------------------------- 0.46s
2026-02-09 04:49:52.802441 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.36s
2026-02-09 04:49:52.802455 | orchestrator | Print report file information ------------------------------------------- 0.36s
2026-02-09 04:49:52.802464 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-02-09 04:49:52.802472 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s
2026-02-09 04:49:52.802480 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s
2026-02-09 04:49:52.802488 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.31s
2026-02-09 04:49:52.802496 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2026-02-09 04:49:52.802503 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s
2026-02-09 04:49:52.802511 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s
2026-02-09 04:49:53.291850 | orchestrator | + osism validate ceph-osds
2026-02-09 04:50:16.270983 | orchestrator |
2026-02-09 04:50:16.271174 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-02-09 04:50:16.271191 | orchestrator |
2026-02-09 04:50:16.271204 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-09 04:50:16.271215 | orchestrator | Monday 09 February 2026 04:50:10 +0000 (0:00:00.499) 0:00:00.499 *******
2026-02-09 04:50:16.271227 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-09 04:50:16.271238 | orchestrator |
2026-02-09 04:50:16.271248 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-09 04:50:16.271259 | orchestrator | Monday 09 February 2026 04:50:11 +0000 (0:00:00.991) 0:00:01.491 *******
2026-02-09 04:50:16.271270 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-09 04:50:16.271286 | orchestrator |
2026-02-09 04:50:16.271306 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-09 04:50:16.271324 | orchestrator | Monday 09 February 2026 04:50:12 +0000 (0:00:00.629) 0:00:02.120 *******
2026-02-09 04:50:16.271344 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-09 04:50:16.271366 | orchestrator |
2026-02-09 04:50:16.271387 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-09 04:50:16.271408 | orchestrator | Monday 09 February 2026 04:50:13 +0000 (0:00:00.949) 0:00:03.070 *******
2026-02-09 04:50:16.271421 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:16.271434 | orchestrator |
2026-02-09 04:50:16.271446 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-09 04:50:16.271457 | orchestrator | Monday 09 February 2026 04:50:13 +0000 (0:00:00.160) 0:00:03.231 *******
2026-02-09 04:50:16.271468 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:16.271479 | orchestrator |
2026-02-09 04:50:16.271490 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-09 04:50:16.271500 | orchestrator | Monday 09 February 2026 04:50:13 +0000 (0:00:00.150) 0:00:03.381 *******
2026-02-09 04:50:16.271511 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:16.271524 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:50:16.271537 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:50:16.271550 | orchestrator |
2026-02-09 04:50:16.271580 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-09 04:50:16.271594 | orchestrator | Monday 09 February 2026 04:50:14 +0000 (0:00:00.383) 0:00:03.764 *******
2026-02-09 04:50:16.271606 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:16.271619 | orchestrator |
2026-02-09 04:50:16.271631 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-09 04:50:16.271670 | orchestrator | Monday 09 February 2026 04:50:14 +0000 (0:00:00.173) 0:00:03.938 *******
2026-02-09 04:50:16.271684 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:16.271697 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:16.271709 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:16.271722 | orchestrator |
2026-02-09 04:50:16.271735 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-02-09 04:50:16.271748 | orchestrator | Monday 09 February 2026 04:50:14 +0000 (0:00:00.368) 0:00:04.306 *******
2026-02-09 04:50:16.271760 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:16.271772 | orchestrator |
2026-02-09 04:50:16.271786 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-09 04:50:16.271798 | orchestrator | Monday 09 February 2026 04:50:15 +0000 (0:00:00.889) 0:00:05.195 *******
2026-02-09 04:50:16.271838 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:16.271851 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:16.271863 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:16.271876 | orchestrator |
2026-02-09 04:50:16.271888 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-02-09 04:50:16.271901 | orchestrator | Monday 09 February 2026 04:50:15 +0000 (0:00:00.315) 0:00:05.511 *******
2026-02-09 04:50:16.271916 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5bee53c4dcfcaac3c40f8ce0244730c2d99af267466c16a22acd98a3fd2c7964', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-09 04:50:16.271932 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8f559dcd2414b961f5c6fc3161e8368b3893144a7477ab1856f42cb45108a990', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-09 04:50:16.271945 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8406aa99b703c696a9becc7f6ce2f9200bbd020332b6ae6d09d4b1718e796089', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-09 04:50:16.271956 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2a6d531a759423cf3b57c56f27babb863aade78c32f267bfe9682a5551bd7c1c', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-09 04:50:16.271967 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bc63910a7d7991ff62786b0468f9fb2c34f4d08d86ed4d44d91d82992c0a6242', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-09 04:50:16.272004 | orchestrator | skipping: [testbed-node-3] => (item={'id': '84c3696fe5e82dbbf547198d5f6272c53bde20c49c985020c75d0518bb74c813', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-09 04:50:16.272016 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4fe11a5b04a573100b2293c799f22ed57ee63f00403d8260764a28ce9c8a96d0', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-09 04:50:16.272027 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9a5e70a6ef2e62d359ddd0d6f655c49a09e5142b4d02eea1e357785828be0c74', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-09 04:50:16.272038 | orchestrator | skipping: [testbed-node-3] => (item={'id': '32f410954b76e8e66311902265cd6da03b5f067457c465a5bc643e54155b8de2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-09 04:50:16.272059 | orchestrator | skipping: [testbed-node-3] => (item={'id': '41da4ec0494cf4e9dce275e5b811b2cef3e2e5e9fa9a7c1af76d98953c160399', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-09 04:50:16.272071 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3afd229059569ba1892335a7d9ae2bd84762c4a564079c133aa97c70b874e93e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-09 04:50:16.272083 | orchestrator | ok: [testbed-node-3] => (item={'id': '9328029c16872b1b33f76fbf8b9bbabd66539ee390b0015475b6f8000461ee4d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'})
2026-02-09 04:50:16.272095 | orchestrator | ok: [testbed-node-3] => (item={'id': '194a62bcc4b840d40d93ed096300502a53c0aa268a3c87da10d3cd3e5af6f995', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-09 04:50:16.272106 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0e105cc1131f76cb702da54cbc21ccc5caac241a1b06e16d1b1f0956e13d1135', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-09 04:50:16.272117 | orchestrator | skipping: [testbed-node-3] => (item={'id': '298588383d91e30de4b35f72286c7714e709e25ac36d2e33511286c6ff21bba2', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-09 04:50:16.272128 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8ec029c9b4ed9910f207254eb7b1358e916241cdcd0e1bc0449af226a3f1d0a1', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-09 04:50:16.272139 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0bf7bb02b34e9355fa64734e46b99149ec66f2717b387d4b12be393f0456d788', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-09 04:50:16.272150 | orchestrator | skipping: [testbed-node-3] => (item={'id': '221ca96c6c3053fafce39cd28c54019f2eb3f8ae19ba4af3ce89c284d13fbc91', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-09 04:50:16.272161 | orchestrator | skipping: [testbed-node-3] => (item={'id': '644f5724e40d06b9f5d587ac9fe268c79ffa739186eafa6f2e3d442948b308a9', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-09 04:50:16.272172 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bdbc0983d46cfcc7dbc65f67803ea9251f252faadc4312bdcacfc5ad7e54beb0', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-09 04:50:16.272190 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7eb77b8d4515a2c36b44e0c9fe879c50e345192d278716081e6a1a98b628a1de', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-09 04:50:16.525335 | orchestrator | skipping: [testbed-node-4] => (item={'id': '67dc610854a0f69a54f86beba4f22c1c7cf6ac6152a61e4249d8274c54a039b9', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-09 04:50:16.525475 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'abef980516f6fd8a40d4fef4ba3fb616d860a810adbc95fb34b82a61d50a0161', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-09 04:50:16.525508 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e9c8ba57db1beb8ecca91053aaa8c46918320b1d5922be53588e05195b0a0455', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-09 04:50:16.525521 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4b54da0c2f086c6e0d0f2b0072148c2b24ba3668ad64e46646b701254b58c84b', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-09 04:50:16.525536 | orchestrator | skipping: [testbed-node-4] => (item={'id': '984e090de94aaf4a79cce95c9aea71864717b914cfc031a9e1117df0aceda07a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130',
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-09 04:50:16.525546 | orchestrator | skipping: [testbed-node-4] => (item={'id': '33b3a177813775e8a1e9d5afdbd2396bbe0f5f5a83f88dd156a5e0136c004842', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-09 04:50:16.525556 | orchestrator | skipping: [testbed-node-4] => (item={'id': '957cc121c5e5db1f3e82541c169e8102fe24b2c9d845ea347b22b30de78badd2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-09 04:50:16.525568 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fdb71d0037076f240d6e7856a529360ebc4e59160ce848d0d47858c7755d2c59', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-09 04:50:16.525578 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dc004226dc01933bb27203e69e27fb9033b307473c278e097b3125519bc9b8fe', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-09 04:50:16.525590 | orchestrator | ok: [testbed-node-4] => (item={'id': '4d84e94f015ce8c5dde84e947bb798500c3aba21f22bdad86c73f95848deebb3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-09 04:50:16.525601 | orchestrator | ok: [testbed-node-4] => (item={'id': 'ac82efafffab650bd587c663dbee5d79135947b63abb75c88fcf2b84432a110d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-09 04:50:16.525611 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'706b1fa5e02bc1aa2e5e58774b3a2e58b324a340056b92b1138db55c9f3cfa9f', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-09 04:50:16.525620 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5f1a2d51b54f38717f8275a9123af6844abd9869c067fafc5e7b46864038726d', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-09 04:50:16.525631 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f5ea2ee041b02716528fe429c8ea41be04eaefaa407eb16d854cfece44c79cdf', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-09 04:50:16.525659 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3f1ac8853d4916e19415bc250094ef6bd68719cbaa358223facda85c4bd5b1a9', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-09 04:50:16.525679 | orchestrator | skipping: [testbed-node-4] => (item={'id': '776cfb30a483672d513b563fe09a68febacc243166ff1b48d405c3f1881b964e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-09 04:50:16.525689 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ba4412ee5d117459bb1dcb3cf933f73d3ba46ef7c592e539ffd33163c7d8411f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-09 04:50:16.525700 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0d077df5fc2bd832f916f3a33bcc4575878e830172a575c4a00ed52002fd2a30', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-09 04:50:16.525710 | orchestrator | skipping: [testbed-node-5] => (item={'id': '49f83250712e3d58c43ee6c77ab21ffefacc766a68e609adc92eec7e0260c318', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-09 04:50:16.525724 | orchestrator | skipping: [testbed-node-5] => (item={'id': '68cf60fa32469c7c4134638db5e094777de631a0233c87854a2ff1327a7f83f0', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-09 04:50:16.525734 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'df77d6e3e41c862ec6a91f95d175743e63650ab261124a58cde61e62b1727c7d', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-09 04:50:16.525744 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b8078640fa540073190f014d61d9a9e7ea716bc2c1e67c198c3bb882bb289f85', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-09 04:50:16.525754 | orchestrator | skipping: [testbed-node-5] => (item={'id': '639a24c354d9f28540ed03478647e767813ffe1437259ad9fbd551ae5dbe50ec', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-09 04:50:16.525764 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e2422a0690c4879244b5c8fcf1b620c2055db85c931bedc143d0d197b795363a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-09 04:50:16.525774 | orchestrator | skipping: [testbed-node-5] => (item={'id': '463e437e368a889477132a10f7b79c509a644df0b942406d496e52c8cf9e81f8', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-09 04:50:16.525784 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5cd269091075c34b87f50e1822d2cab9993919d765bbe73efd251067ea65e0e6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-09 04:50:16.525794 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4c8a007aece10ec8cc0e85bc4174b3872987a187cd844261189949637a4ca16d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-09 04:50:16.525828 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b0469e27625390eca514ee088113d39fa42e9ee12042bebaaacd948f887aa511', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-09 04:50:16.525845 | orchestrator | ok: [testbed-node-5] => (item={'id': '0191223387fb621496d761092f245ba39edc206cf1c65591a0609b40498347cd', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-09 04:50:16.525863 | orchestrator | ok: [testbed-node-5] => (item={'id': 'db0198b15ea20522c1a018a72265d3cad2bc13258f80c3fa6351999a013e1379', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-09 04:50:28.673504 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'04aa3aecf9ef5d78ccca6a216a9d3b58c5e498df1b9ea285e13a5b9fce4212c8', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-09 04:50:28.673642 | orchestrator | skipping: [testbed-node-5] => (item={'id': '37058f57b032868e1fd17f9a8e1d3b13a6e9edac485074a63793c02fd942da27', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-09 04:50:28.673656 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a232afc493247abf5bc22d9ec3f52d73d4680df5eed166931b0bc1754f969666', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-09 04:50:28.673666 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0fc5fcfe44fb442e608d4e9bf6ccd8dea85f4681e07c7c1fc48e843023372e70', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-09 04:50:28.673693 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bdcdc6affc0525d290a9c4f57b8fa95efb3f247d829f2957cc1d3d432968a9e7', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-09 04:50:28.673702 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb72dc7c793655a36d4189d7b61a04fa1d8908ddebafa527c91c28ed7fcf3395', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-09 04:50:28.673710 | orchestrator |
2026-02-09 04:50:28.673723 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2026-02-09 04:50:28.673739 | orchestrator | Monday 09 February 2026 04:50:16 +0000 (0:00:00.513) 0:00:06.025 *******
2026-02-09 04:50:28.673752 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.673767 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:28.673779 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:28.673789 | orchestrator |
2026-02-09 04:50:28.673801 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2026-02-09 04:50:28.673814 | orchestrator | Monday 09 February 2026 04:50:16 +0000 (0:00:00.345) 0:00:06.371 *******
2026-02-09 04:50:28.673884 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.673898 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:50:28.673909 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:50:28.673920 | orchestrator |
2026-02-09 04:50:28.673932 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2026-02-09 04:50:28.673943 | orchestrator | Monday 09 February 2026 04:50:17 +0000 (0:00:00.575) 0:00:06.946 *******
2026-02-09 04:50:28.673956 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.673969 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:28.673981 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:28.673994 | orchestrator |
2026-02-09 04:50:28.674005 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-09 04:50:28.674014 | orchestrator | Monday 09 February 2026 04:50:17 +0000 (0:00:00.402) 0:00:07.349 *******
2026-02-09 04:50:28.674077 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.674086 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:28.674119 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:28.674128 | orchestrator |
2026-02-09 04:50:28.674136 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2026-02-09 04:50:28.674144 | orchestrator | Monday 09 February 2026 04:50:18 +0000 (0:00:00.330) 0:00:07.680 *******
2026-02-09 04:50:28.674153 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2026-02-09 04:50:28.674163 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2026-02-09 04:50:28.674172 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.674180 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2026-02-09 04:50:28.674188 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2026-02-09 04:50:28.674197 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:50:28.674205 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2026-02-09 04:50:28.674213 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2026-02-09 04:50:28.674222 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:50:28.674230 | orchestrator |
2026-02-09 04:50:28.674239 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2026-02-09 04:50:28.674247 | orchestrator | Monday 09 February 2026 04:50:18 +0000 (0:00:00.341) 0:00:08.021 *******
2026-02-09 04:50:28.674256 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.674264 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:28.674272 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:28.674280 | orchestrator |
2026-02-09 04:50:28.674288 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-09 04:50:28.674297 | orchestrator | Monday 09 February 2026 04:50:19 +0000 (0:00:00.645) 0:00:08.667 *******
2026-02-09 04:50:28.674305 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.674334 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:50:28.674342 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:50:28.674349 | orchestrator |
2026-02-09 04:50:28.674356 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2026-02-09 04:50:28.674364 | orchestrator | Monday 09 February 2026 04:50:19 +0000 (0:00:00.305) 0:00:08.972 *******
2026-02-09 04:50:28.674371 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.674378 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:50:28.674385 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:50:28.674392 | orchestrator |
2026-02-09 04:50:28.674399 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2026-02-09 04:50:28.674406 | orchestrator | Monday 09 February 2026 04:50:19 +0000 (0:00:00.309) 0:00:09.282 *******
2026-02-09 04:50:28.674414 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.674421 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:28.674428 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:28.674435 | orchestrator |
2026-02-09 04:50:28.674442 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-09 04:50:28.674449 | orchestrator | Monday 09 February 2026 04:50:20 +0000 (0:00:00.300) 0:00:09.582 *******
2026-02-09 04:50:28.674456 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.674464 | orchestrator |
2026-02-09 04:50:28.674471 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-09 04:50:28.674478 | orchestrator | Monday 09 February 2026 04:50:20 +0000 (0:00:00.771) 0:00:10.354 *******
2026-02-09 04:50:28.674485 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.674492 | orchestrator |
2026-02-09 04:50:28.674499 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-09 04:50:28.674507 | orchestrator | Monday 09 February 2026 04:50:21 +0000 (0:00:00.307) 0:00:10.662 *******
2026-02-09 04:50:28.674514 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.674521 | orchestrator |
2026-02-09 04:50:28.674528 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:50:28.674542 | orchestrator | Monday 09 February 2026 04:50:21 +0000 (0:00:00.273) 0:00:10.936 *******
2026-02-09 04:50:28.674551 | orchestrator |
2026-02-09 04:50:28.674563 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:50:28.674576 | orchestrator | Monday 09 February 2026 04:50:21 +0000 (0:00:00.073) 0:00:11.010 *******
2026-02-09 04:50:28.674588 | orchestrator |
2026-02-09 04:50:28.674600 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:50:28.674611 | orchestrator | Monday 09 February 2026 04:50:21 +0000 (0:00:00.071) 0:00:11.081 *******
2026-02-09 04:50:28.674623 | orchestrator |
2026-02-09 04:50:28.674635 | orchestrator | TASK [Print report file information] *******************************************
2026-02-09 04:50:28.674647 | orchestrator | Monday 09 February 2026 04:50:21 +0000 (0:00:00.073) 0:00:11.155 *******
2026-02-09 04:50:28.674659 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.674671 | orchestrator |
2026-02-09 04:50:28.674682 | orchestrator | TASK [Fail early due to containers not running] ********************************
2026-02-09 04:50:28.674695 | orchestrator | Monday 09 February 2026 04:50:21 +0000 (0:00:00.278) 0:00:11.433 *******
2026-02-09 04:50:28.674707 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.674720 | orchestrator |
2026-02-09 04:50:28.674732 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-09 04:50:28.674744 | orchestrator | Monday 09 February 2026 04:50:22 +0000 (0:00:00.261) 0:00:11.695 *******
2026-02-09 04:50:28.674752 | 
orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.674759 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:28.674767 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:28.674774 | orchestrator |
2026-02-09 04:50:28.674781 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2026-02-09 04:50:28.674788 | orchestrator | Monday 09 February 2026 04:50:22 +0000 (0:00:00.330) 0:00:12.026 *******
2026-02-09 04:50:28.674795 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.674802 | orchestrator |
2026-02-09 04:50:28.674809 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2026-02-09 04:50:28.674816 | orchestrator | Monday 09 February 2026 04:50:23 +0000 (0:00:00.837) 0:00:12.863 *******
2026-02-09 04:50:28.674849 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-09 04:50:28.674857 | orchestrator |
2026-02-09 04:50:28.674864 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2026-02-09 04:50:28.674871 | orchestrator | Monday 09 February 2026 04:50:24 +0000 (0:00:01.635) 0:00:14.499 *******
2026-02-09 04:50:28.674879 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.674886 | orchestrator |
2026-02-09 04:50:28.674893 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2026-02-09 04:50:28.674900 | orchestrator | Monday 09 February 2026 04:50:25 +0000 (0:00:00.148) 0:00:14.648 *******
2026-02-09 04:50:28.674907 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.674914 | orchestrator |
2026-02-09 04:50:28.674922 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2026-02-09 04:50:28.674929 | orchestrator | Monday 09 February 2026 04:50:25 +0000 (0:00:00.401) 0:00:15.050 *******
2026-02-09 04:50:28.674936 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:28.674943 | orchestrator |
2026-02-09 04:50:28.674951 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2026-02-09 04:50:28.674958 | orchestrator | Monday 09 February 2026 04:50:25 +0000 (0:00:00.130) 0:00:15.180 *******
2026-02-09 04:50:28.674965 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.674972 | orchestrator |
2026-02-09 04:50:28.674979 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-09 04:50:28.674986 | orchestrator | Monday 09 February 2026 04:50:25 +0000 (0:00:00.145) 0:00:15.326 *******
2026-02-09 04:50:28.674994 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:28.675001 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:28.675008 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:28.675024 | orchestrator |
2026-02-09 04:50:28.675032 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2026-02-09 04:50:28.675039 | orchestrator | Monday 09 February 2026 04:50:26 +0000 (0:00:00.322) 0:00:15.648 *******
2026-02-09 04:50:28.675046 | orchestrator | changed: [testbed-node-3]
2026-02-09 04:50:28.675054 | orchestrator | changed: [testbed-node-4]
2026-02-09 04:50:28.675061 | orchestrator | changed: [testbed-node-5]
2026-02-09 04:50:40.916223 | orchestrator |
2026-02-09 04:50:40.916316 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2026-02-09 04:50:40.916324 | orchestrator | Monday 09 February 2026 04:50:28 +0000 (0:00:02.525) 0:00:18.174 *******
2026-02-09 04:50:40.916328 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:40.916333 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:40.916338 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:40.916342 | orchestrator |
2026-02-09 04:50:40.916346 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-02-09 04:50:40.916350 | orchestrator | Monday 09 February 2026 04:50:28 +0000 (0:00:00.340) 0:00:18.514 *******
2026-02-09 04:50:40.916354 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:40.916358 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:40.916362 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:40.916365 | orchestrator |
2026-02-09 04:50:40.916369 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-02-09 04:50:40.916373 | orchestrator | Monday 09 February 2026 04:50:29 +0000 (0:00:00.572) 0:00:19.086 *******
2026-02-09 04:50:40.916377 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:40.916381 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:50:40.916385 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:50:40.916389 | orchestrator |
2026-02-09 04:50:40.916393 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-02-09 04:50:40.916396 | orchestrator | Monday 09 February 2026 04:50:29 +0000 (0:00:00.403) 0:00:19.490 *******
2026-02-09 04:50:40.916400 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:40.916404 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:40.916407 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:40.916411 | orchestrator |
2026-02-09 04:50:40.916415 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-02-09 04:50:40.916421 | orchestrator | Monday 09 February 2026 04:50:30 +0000 (0:00:00.838) 0:00:20.329 *******
2026-02-09 04:50:40.916425 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:40.916428 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:50:40.916432 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:50:40.916436 | orchestrator |
2026-02-09 04:50:40.916440 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-02-09 04:50:40.916444 | orchestrator | Monday 09 February 2026 04:50:31 +0000 (0:00:00.371) 0:00:20.700 *******
2026-02-09 04:50:40.916448 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:40.916451 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:50:40.916455 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:50:40.916459 | orchestrator |
2026-02-09 04:50:40.916462 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-09 04:50:40.916466 | orchestrator | Monday 09 February 2026 04:50:31 +0000 (0:00:00.390) 0:00:21.091 *******
2026-02-09 04:50:40.916470 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:40.916473 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:40.916477 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:40.916481 | orchestrator |
2026-02-09 04:50:40.916484 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-02-09 04:50:40.916488 | orchestrator | Monday 09 February 2026 04:50:32 +0000 (0:00:00.618) 0:00:21.710 *******
2026-02-09 04:50:40.916492 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:40.916496 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:40.916499 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:40.916503 | orchestrator |
2026-02-09 04:50:40.916507 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-02-09 04:50:40.916524 | orchestrator | Monday 09 February 2026 04:50:33 +0000 (0:00:01.010) 0:00:22.720 *******
2026-02-09 04:50:40.916529 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:40.916532 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:40.916536 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:40.916540 | orchestrator |
2026-02-09 04:50:40.916543 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-02-09 04:50:40.916547 | orchestrator | Monday 09 February 2026 04:50:33 +0000 (0:00:00.378) 0:00:23.098 *******
2026-02-09 04:50:40.916551 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:40.916554 | orchestrator | skipping: [testbed-node-4]
2026-02-09 04:50:40.916558 | orchestrator | skipping: [testbed-node-5]
2026-02-09 04:50:40.916562 | orchestrator |
2026-02-09 04:50:40.916565 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-02-09 04:50:40.916569 | orchestrator | Monday 09 February 2026 04:50:33 +0000 (0:00:00.351) 0:00:23.450 *******
2026-02-09 04:50:40.916573 | orchestrator | ok: [testbed-node-3]
2026-02-09 04:50:40.916576 | orchestrator | ok: [testbed-node-4]
2026-02-09 04:50:40.916580 | orchestrator | ok: [testbed-node-5]
2026-02-09 04:50:40.916584 | orchestrator |
2026-02-09 04:50:40.916587 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-09 04:50:40.916591 | orchestrator | Monday 09 February 2026 04:50:34 +0000 (0:00:00.707) 0:00:24.158 *******
2026-02-09 04:50:40.916595 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-09 04:50:40.916599 | orchestrator |
2026-02-09 04:50:40.916603 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-09 04:50:40.916607 | orchestrator | Monday 09 February 2026 04:50:34 +0000 (0:00:00.292) 0:00:24.450 *******
2026-02-09 04:50:40.916610 | orchestrator | skipping: [testbed-node-3]
2026-02-09 04:50:40.916614 | orchestrator |
2026-02-09 04:50:40.916618 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-09 04:50:40.916621 | orchestrator | Monday 09 February 2026 04:50:35 +0000 (0:00:00.319) 0:00:24.769 *******
2026-02-09 04:50:40.916625 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-09 04:50:40.916629 | orchestrator |
2026-02-09 04:50:40.916632 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-09 
04:50:40.916636 | orchestrator | Monday 09 February 2026 04:50:37 +0000 (0:00:01.924) 0:00:26.694 *******
2026-02-09 04:50:40.916640 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-09 04:50:40.916644 | orchestrator |
2026-02-09 04:50:40.916648 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-09 04:50:40.916651 | orchestrator | Monday 09 February 2026 04:50:37 +0000 (0:00:00.309) 0:00:27.003 *******
2026-02-09 04:50:40.916655 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-09 04:50:40.916659 | orchestrator |
2026-02-09 04:50:40.916672 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:50:40.916676 | orchestrator | Monday 09 February 2026 04:50:37 +0000 (0:00:00.283) 0:00:27.286 *******
2026-02-09 04:50:40.916680 | orchestrator |
2026-02-09 04:50:40.916684 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:50:40.916687 | orchestrator | Monday 09 February 2026 04:50:37 +0000 (0:00:00.087) 0:00:27.374 *******
2026-02-09 04:50:40.916691 | orchestrator |
2026-02-09 04:50:40.916695 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-09 04:50:40.916698 | orchestrator | Monday 09 February 2026 04:50:37 +0000 (0:00:00.080) 0:00:27.454 *******
2026-02-09 04:50:40.916702 | orchestrator |
2026-02-09 04:50:40.916706 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-09 04:50:40.916709 | orchestrator | Monday 09 February 2026 04:50:38 +0000 (0:00:00.112) 0:00:27.567 *******
2026-02-09 04:50:40.916713 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-09 04:50:40.916717 | orchestrator |
2026-02-09 04:50:40.916720 | orchestrator | TASK [Print report file information] *******************************************
2026-02-09 04:50:40.916728 | orchestrator | Monday 09 February 2026 04:50:39 +0000 (0:00:01.758) 0:00:29.326 *******
2026-02-09 04:50:40.916732 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-02-09 04:50:40.916735 | orchestrator |  "msg": [
2026-02-09 04:50:40.916740 | orchestrator |  "Validator run completed.",
2026-02-09 04:50:40.916744 | orchestrator |  "You can find the report file here:",
2026-02-09 04:50:40.916747 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-09T04:50:11+00:00-report.json",
2026-02-09 04:50:40.916754 | orchestrator |  "on the following host:",
2026-02-09 04:50:40.916758 | orchestrator |  "testbed-manager"
2026-02-09 04:50:40.916761 | orchestrator |  ]
2026-02-09 04:50:40.916765 | orchestrator | }
2026-02-09 04:50:40.916769 | orchestrator |
2026-02-09 04:50:40.916773 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 04:50:40.916778 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-09 04:50:40.916783 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-09 04:50:40.916786 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-09 04:50:40.916790 | orchestrator |
2026-02-09 04:50:40.916794 | orchestrator |
2026-02-09 04:50:40.916798 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 04:50:40.916802 | orchestrator | Monday 09 February 2026 04:50:40 +0000 (0:00:00.673) 0:00:29.999 *******
2026-02-09 04:50:40.916874 | orchestrator | ===============================================================================
2026-02-09 04:50:40.916881 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.53s
2026-02-09 04:50:40.916885 | orchestrator | Aggregate test results step one ----------------------------------------- 1.92s
2026-02-09 04:50:40.916890 | orchestrator | Write report file ------------------------------------------------------- 1.76s
2026-02-09 04:50:40.916894 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.64s
2026-02-09 04:50:40.916898 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.01s
2026-02-09 04:50:40.916903 | orchestrator | Get timestamp for report file ------------------------------------------- 0.99s
2026-02-09 04:50:40.916907 | orchestrator | Create report output directory ------------------------------------------ 0.95s
2026-02-09 04:50:40.916911 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.89s
2026-02-09 04:50:40.916915 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.84s
2026-02-09 04:50:40.916920 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.84s
2026-02-09 04:50:40.916924 | orchestrator | Aggregate test results step one ----------------------------------------- 0.77s
2026-02-09 04:50:40.916928 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.71s
2026-02-09 04:50:40.916933 | orchestrator | Print report file information ------------------------------------------- 0.67s
2026-02-09 04:50:40.916937 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.65s
2026-02-09 04:50:40.916941 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.63s
2026-02-09 04:50:40.916945 | orchestrator | Prepare test data ------------------------------------------------------- 0.62s
2026-02-09 04:50:40.916950 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.58s
2026-02-09 04:50:40.916954 
| orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.57s 2026-02-09 04:50:40.916958 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.51s 2026-02-09 04:50:40.916963 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.40s 2026-02-09 04:50:41.430297 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-09 04:50:41.438961 | orchestrator | + set -e 2026-02-09 04:50:41.439055 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 04:50:41.439079 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 04:50:41.439097 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 04:50:41.439113 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-09 04:50:41.439130 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 04:50:41.439147 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 04:50:41.439164 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 04:50:41.439181 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 04:50:41.439198 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 04:50:41.439215 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 04:50:41.439231 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 04:50:41.439247 | orchestrator | ++ export ARA=false 2026-02-09 04:50:41.439264 | orchestrator | ++ ARA=false 2026-02-09 04:50:41.439280 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 04:50:41.439297 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 04:50:41.439313 | orchestrator | ++ export TEMPEST=false 2026-02-09 04:50:41.439330 | orchestrator | ++ TEMPEST=false 2026-02-09 04:50:41.439346 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 04:50:41.439362 | orchestrator | ++ IS_ZUUL=true 2026-02-09 04:50:41.439378 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 04:50:41.439396 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 04:50:41.439411 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 04:50:41.439428 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 04:50:41.439445 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 04:50:41.439462 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 04:50:41.439478 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 04:50:41.439495 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 04:50:41.439513 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 04:50:41.439530 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 04:50:41.439546 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-09 04:50:41.439563 | orchestrator | + source /etc/os-release 2026-02-09 04:50:41.439580 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2026-02-09 04:50:41.439597 | orchestrator | ++ NAME=Ubuntu 2026-02-09 04:50:41.439613 | orchestrator | ++ VERSION_ID=24.04 2026-02-09 04:50:41.439631 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2026-02-09 04:50:41.439647 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-09 04:50:41.439664 | orchestrator | ++ ID=ubuntu 2026-02-09 04:50:41.439682 | orchestrator | ++ ID_LIKE=debian 2026-02-09 04:50:41.439698 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-09 04:50:41.439714 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-09 04:50:41.439730 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-09 04:50:41.439747 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-09 04:50:41.439764 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-09 04:50:41.439780 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-09 04:50:41.439797 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-09 04:50:41.439818 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-02-09 
04:50:41.439864 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-09 04:50:41.472034 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-09 04:51:05.357020 | orchestrator | 2026-02-09 04:51:05.357140 | orchestrator | # Status of Elasticsearch 2026-02-09 04:51:05.357158 | orchestrator | 2026-02-09 04:51:05.357168 | orchestrator | + pushd /opt/configuration/contrib 2026-02-09 04:51:05.357178 | orchestrator | + echo 2026-02-09 04:51:05.357187 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-09 04:51:05.357199 | orchestrator | + echo 2026-02-09 04:51:05.357215 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-09 04:51:05.548802 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-09 04:51:05.549122 | orchestrator | 2026-02-09 04:51:05.549149 | orchestrator | # Status of MariaDB 2026-02-09 04:51:05.549162 | orchestrator | 2026-02-09 04:51:05.549174 | orchestrator | + echo 2026-02-09 04:51:05.549233 | orchestrator | + echo '# Status of MariaDB' 2026-02-09 04:51:05.549245 | orchestrator | + echo 2026-02-09 04:51:05.550267 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-09 04:51:05.605649 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-09 04:51:05.605735 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-09 04:51:05.605748 | orchestrator | + MARIADB_USER=root_shard_0 2026-02-09 04:51:05.605760 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-02-09 04:51:05.667407 
| orchestrator | Reading package lists... 2026-02-09 04:51:06.038469 | orchestrator | Building dependency tree... 2026-02-09 04:51:06.039033 | orchestrator | Reading state information... 2026-02-09 04:51:06.503497 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-02-09 04:51:06.503621 | orchestrator | bc set to manually installed. 2026-02-09 04:51:06.503647 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-02-09 04:51:07.250577 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-02-09 04:51:07.251231 | orchestrator | 2026-02-09 04:51:07.251269 | orchestrator | # Status of Prometheus 2026-02-09 04:51:07.251284 | orchestrator | 2026-02-09 04:51:07.251298 | orchestrator | + echo 2026-02-09 04:51:07.251311 | orchestrator | + echo '# Status of Prometheus' 2026-02-09 04:51:07.251324 | orchestrator | + echo 2026-02-09 04:51:07.251339 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-09 04:51:07.310116 | orchestrator | Unauthorized 2026-02-09 04:51:07.313262 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-09 04:51:07.362830 | orchestrator | Unauthorized 2026-02-09 04:51:07.366815 | orchestrator | 2026-02-09 04:51:07.366905 | orchestrator | # Status of RabbitMQ 2026-02-09 04:51:07.366924 | orchestrator | 2026-02-09 04:51:07.366935 | orchestrator | + echo 2026-02-09 04:51:07.366946 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-09 04:51:07.366957 | orchestrator | + echo 2026-02-09 04:51:07.367543 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-09 04:51:07.440774 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-09 04:51:07.440911 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-09 04:51:07.440943 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-02-09 04:51:07.975118 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-02-09 04:51:07.988825 | orchestrator | + echo 2026-02-09 04:51:07.988906 | orchestrator | 2026-02-09 04:51:07.988922 | orchestrator | + echo '# Status of Redis' 2026-02-09 04:51:07.988936 | orchestrator | # Status of Redis 2026-02-09 04:51:07.988947 | orchestrator | 2026-02-09 04:51:07.988969 | orchestrator | + echo 2026-02-09 04:51:07.988990 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-09 04:51:07.994520 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001912s;;;0.000000;10.000000 2026-02-09 04:51:07.995308 | orchestrator | + popd 2026-02-09 04:51:07.995363 | orchestrator | 2026-02-09 04:51:07.995387 | orchestrator | + echo 2026-02-09 04:51:07.995409 | orchestrator | # Create backup of MariaDB database 2026-02-09 04:51:07.995432 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-09 04:51:07.995445 | orchestrator | + echo 2026-02-09 04:51:07.995456 | orchestrator | 2026-02-09 04:51:07.995468 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-09 04:51:10.149541 | orchestrator | 2026-02-09 04:51:10 | INFO  | Task b8c2ce39-97a2-4c4d-95fc-c943b7b3c11d (mariadb_backup) was prepared for execution. 2026-02-09 04:51:10.149633 | orchestrator | 2026-02-09 04:51:10 | INFO  | It takes a moment until task b8c2ce39-97a2-4c4d-95fc-c943b7b3c11d (mariadb_backup) has been started and output is visible here. 
2026-02-09 04:51:39.740155 | orchestrator |
2026-02-09 04:51:39.740263 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 04:51:39.740278 | orchestrator |
2026-02-09 04:51:39.740289 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 04:51:39.740300 | orchestrator | Monday 09 February 2026  04:51:14 +0000 (0:00:00.191)       0:00:00.191 *******
2026-02-09 04:51:39.740310 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:51:39.740320 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:51:39.740330 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:51:39.740340 | orchestrator |
2026-02-09 04:51:39.740374 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 04:51:39.740384 | orchestrator | Monday 09 February 2026  04:51:15 +0000 (0:00:00.352)       0:00:00.544 *******
2026-02-09 04:51:39.740394 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-09 04:51:39.740404 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-09 04:51:39.740414 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-09 04:51:39.740424 | orchestrator |
2026-02-09 04:51:39.740433 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-09 04:51:39.740443 | orchestrator |
2026-02-09 04:51:39.740487 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-09 04:51:39.740499 | orchestrator | Monday 09 February 2026  04:51:15 +0000 (0:00:00.678)       0:00:01.223 *******
2026-02-09 04:51:39.740509 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 04:51:39.740519 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 04:51:39.740529 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 04:51:39.740538 | orchestrator |
2026-02-09 04:51:39.740548 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-09 04:51:39.740557 | orchestrator | Monday 09 February 2026  04:51:16 +0000 (0:00:00.492)       0:00:01.715 *******
2026-02-09 04:51:39.740568 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 04:51:39.740580 | orchestrator |
2026-02-09 04:51:39.740590 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-02-09 04:51:39.740613 | orchestrator | Monday 09 February 2026  04:51:16 +0000 (0:00:00.649)       0:00:02.365 *******
2026-02-09 04:51:39.740623 | orchestrator | ok: [testbed-node-0]
2026-02-09 04:51:39.740632 | orchestrator | ok: [testbed-node-2]
2026-02-09 04:51:39.740642 | orchestrator | ok: [testbed-node-1]
2026-02-09 04:51:39.740651 | orchestrator |
2026-02-09 04:51:39.740661 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-02-09 04:51:39.740671 | orchestrator | Monday 09 February 2026  04:51:20 +0000 (0:00:03.495)       0:00:05.861 *******
2026-02-09 04:51:39.740680 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-09 04:51:39.740690 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-09 04:51:39.740700 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-09 04:51:39.740711 | orchestrator | mariadb_bootstrap_restart
2026-02-09 04:51:39.740723 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:51:39.740734 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:51:39.740747 | orchestrator | changed: [testbed-node-0]
2026-02-09 04:51:39.740758 | orchestrator |
2026-02-09 04:51:39.740768 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-09 04:51:39.740780 | orchestrator | skipping: no hosts matched
2026-02-09 04:51:39.740790 | orchestrator |
2026-02-09 04:51:39.740801 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-09 04:51:39.740812 | orchestrator | skipping: no hosts matched
2026-02-09 04:51:39.740823 | orchestrator |
2026-02-09 04:51:39.740834 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-09 04:51:39.740845 | orchestrator | skipping: no hosts matched
2026-02-09 04:51:39.740856 | orchestrator |
2026-02-09 04:51:39.740867 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-09 04:51:39.740878 | orchestrator |
2026-02-09 04:51:39.740889 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-09 04:51:39.740901 | orchestrator | Monday 09 February 2026  04:51:38 +0000 (0:00:18.140)       0:00:24.001 *******
2026-02-09 04:51:39.740944 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:51:39.740956 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:51:39.740967 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:51:39.740978 | orchestrator |
2026-02-09 04:51:39.740989 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-09 04:51:39.741009 | orchestrator | Monday 09 February 2026  04:51:38 +0000 (0:00:00.321)       0:00:24.323 *******
2026-02-09 04:51:39.741021 | orchestrator | skipping: [testbed-node-0]
2026-02-09 04:51:39.741032 | orchestrator | skipping: [testbed-node-1]
2026-02-09 04:51:39.741043 | orchestrator | skipping: [testbed-node-2]
2026-02-09 04:51:39.741054 | orchestrator |
2026-02-09 04:51:39.741065 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 04:51:39.741076 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 04:51:39.741086 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-09 04:51:39.741096 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-09 04:51:39.741105 | orchestrator |
2026-02-09 04:51:39.741115 | orchestrator |
2026-02-09 04:51:39.741125 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 04:51:39.741135 | orchestrator | Monday 09 February 2026  04:51:39 +0000 (0:00:00.454)       0:00:24.778 *******
2026-02-09 04:51:39.741144 | orchestrator | ===============================================================================
2026-02-09 04:51:39.741154 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.14s
2026-02-09 04:51:39.741182 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.50s
2026-02-09 04:51:39.741193 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2026-02-09 04:51:39.741203 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.65s
2026-02-09 04:51:39.741212 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.49s
2026-02-09 04:51:39.741222 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.46s
2026-02-09 04:51:39.741232 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-02-09 04:51:39.741242 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s
2026-02-09 04:51:40.112587 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-02-09 04:51:40.121602 | orchestrator | + set -e
2026-02-09 04:51:40.121676 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-09 04:51:40.122698 | orchestrator | ++ export INTERACTIVE=false
2026-02-09 04:51:40.122726 | orchestrator | ++ INTERACTIVE=false
2026-02-09 04:51:40.122737 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-09 04:51:40.122748 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-09 04:51:40.122759 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-09 04:51:40.125164 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-09 04:51:40.132560 | orchestrator |
2026-02-09 04:51:40.132619 | orchestrator | # OpenStack endpoints
2026-02-09 04:51:40.132635 | orchestrator |
2026-02-09 04:51:40.132646 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-09 04:51:40.132657 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-09 04:51:40.132668 | orchestrator | + export OS_CLOUD=admin
2026-02-09 04:51:40.132677 | orchestrator | + OS_CLOUD=admin
2026-02-09 04:51:40.132684 | orchestrator | + echo
2026-02-09 04:51:40.132690 | orchestrator | + echo '# OpenStack endpoints'
2026-02-09 04:51:40.132697 | orchestrator | + echo
2026-02-09 04:51:40.132708 | orchestrator | + openstack endpoint list
2026-02-09 04:51:43.285660 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-09 04:51:43.285727 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-02-09 04:51:43.285741 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-09 04:51:43.285752 | orchestrator | | 0c089793c14c4887a8781ceb91c93c40 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-02-09 04:51:43.285815 | orchestrator | | 0d1ddf8b2ef547d387e705a04a1c2ee9 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-02-09 04:51:43.285827 | orchestrator | | 106c0a23ba6d4490a6c9e84f5a3d899e | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-02-09 04:51:43.285838 | orchestrator | | 26e6fdbeddd04457a371610d31a5256b | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-09 04:51:43.285850 | orchestrator | | 301acfd8c2f246ea948dd6c764c77e61 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-02-09 04:51:43.285861 | orchestrator | | 3df339cac6fd46788ea9b4dc0b4db24c | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-02-09 04:51:43.285871 | orchestrator | | 423014144ad94587a12ecef8af2ad36c | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-02-09 04:51:43.285882 | orchestrator | | 47a294400ef5406da03836657f79275e | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-02-09 04:51:43.285902 | orchestrator | | 4b9b3095863c4c519197cc3247e0ff48 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-02-09 04:51:43.285961 | orchestrator | | 57191edb12b840d88abbd2a0c6d189c5 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-02-09 04:51:43.285982 | orchestrator | | 5c9a2597640e4f2c83f62a1f8137ff69 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-02-09 04:51:43.285994 | orchestrator | | 5e23a9ccde114ee0b3fc9a6c846347db | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-02-09 04:51:43.286005 | orchestrator | | 6b5b693515ca460785b24733a3a9440a | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-02-09 04:51:43.286063 | orchestrator | | 6b90ba15984f4ac7b5c766360ff241d0 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-02-09 04:51:43.286077 | orchestrator | | 6dab7867705d4f9da3691e9c32cfa2d5 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-02-09 04:51:43.286088 | orchestrator | | 765aca0de42d4ae39841213fde628df7 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-02-09 04:51:43.286099 | orchestrator | | 8421310ece9e4c578c513b3c1752c980 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-02-09 04:51:43.286110 | orchestrator | | 9f8effb1c874408885396fce39de7e6f | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-02-09 04:51:43.286121 | orchestrator | | aae7aa3c01594d39970ea2b9865b5b20 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-02-09 04:51:43.286132 | orchestrator | | ae411cf4c2884a0c82d76f9577c3d9e3 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-09 04:51:43.286159 | orchestrator | | b2a978488afa4a4897790842aa5e39a3 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-02-09 04:51:43.286181 | orchestrator | | b8b5db0146104482be5c622e050193e5 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-02-09 04:51:43.286199 | orchestrator | | b94d4deb3ac54d02b100986384cd19f9 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-02-09 04:51:43.286212 | orchestrator | | ba458b2841fb4f5f8fa36b3406e86369 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-02-09 04:51:43.286225 | orchestrator | | baa1ec23e7c645c2bba81843abd43263 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-02-09 04:51:43.286237 | orchestrator | | e07b8f0d5f4644a4933f4f13228ef92e | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-09 04:51:43.286250 | orchestrator | | e9a65344a8e547e1b33cdc79a34a32bf | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-09 04:51:43.286262 | orchestrator | | ed9f01db88c047228c0a1c856ffd5678 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-09 04:51:43.286274 | orchestrator | | ef4e4109668d4675b2792aee4c32c39f | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-09 04:51:43.286286 | orchestrator | | f2de77cc01424baa8688fb0b4ac39280 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-02-09 04:51:43.286298 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-09 04:51:43.596361 | orchestrator |
2026-02-09 04:51:43.596471 | orchestrator | # Cinder
2026-02-09 04:51:43.596483 | orchestrator |
2026-02-09 04:51:43.596491 | orchestrator | + echo
2026-02-09 04:51:43.596500 | orchestrator | + echo '# Cinder'
2026-02-09 04:51:43.596509 | orchestrator | + echo
2026-02-09 04:51:43.596517 | orchestrator | + openstack volume service list
2026-02-09 04:51:46.429466 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-09 04:51:46.429571 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-02-09 04:51:46.429578 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-09 04:51:46.429582 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-09T04:51:36.000000 |
2026-02-09 04:51:46.429586 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-09T04:51:36.000000 |
2026-02-09 04:51:46.429591 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-09T04:51:36.000000 |
2026-02-09 04:51:46.429595 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-09T04:51:36.000000 |
2026-02-09 04:51:46.429598 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-09T04:51:42.000000 |
2026-02-09 04:51:46.429602 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-09T04:51:43.000000 |
2026-02-09 04:51:46.429606 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-09T04:51:39.000000 |
2026-02-09 04:51:46.429610 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-09T04:51:41.000000 |
2026-02-09 04:51:46.429613 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-09T04:51:42.000000 |
2026-02-09 04:51:46.429643 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-09 04:51:46.732218 | orchestrator |
2026-02-09 04:51:46.732321 | orchestrator | # Neutron
2026-02-09 04:51:46.732336 | orchestrator |
2026-02-09 04:51:46.732348 | orchestrator | + echo
2026-02-09 04:51:46.732360 | orchestrator | + echo '# Neutron'
2026-02-09 04:51:46.732374 | orchestrator | + echo
2026-02-09 04:51:46.732385 | orchestrator | + openstack network agent list
2026-02-09 04:51:49.535021 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-09 04:51:49.535116 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-02-09 04:51:49.535129 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-09 04:51:49.535139 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-02-09 04:51:49.535147 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-02-09 04:51:49.535156 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-02-09 04:51:49.535164 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-02-09 04:51:49.535190 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-02-09 04:51:49.535199 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-02-09 04:51:49.535208 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-09 04:51:49.535216 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-09 04:51:49.535225 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-09 04:51:49.535234 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-09 04:51:49.840454 | orchestrator | + openstack network service provider list
2026-02-09 04:51:52.521820 | orchestrator | +---------------+------+---------+
2026-02-09 04:51:52.522076 | orchestrator | | Service Type | Name | Default |
2026-02-09 04:51:52.522095 | orchestrator | +---------------+------+---------+
2026-02-09 04:51:52.522106 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-02-09 04:51:52.522116 | orchestrator | +---------------+------+---------+
2026-02-09 04:51:52.847528 | orchestrator |
2026-02-09 04:51:52.847645 | orchestrator | # Nova
2026-02-09 04:51:52.847672 | orchestrator |
2026-02-09 04:51:52.847684 | orchestrator | + echo
2026-02-09 04:51:52.847694 | orchestrator | + echo '# Nova'
2026-02-09 04:51:52.847704 | orchestrator | + echo
2026-02-09 04:51:52.847715 | orchestrator | + openstack compute service list
2026-02-09 04:51:56.350805 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-09 04:51:56.351004 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-02-09 04:51:56.351022 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-09 04:51:56.351035 | orchestrator | | c30bd268-e09a-4f42-ab94-40fc48b7dbf9 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-09T04:51:53.000000 |
2026-02-09 04:51:56.351081 | orchestrator | | 066e4725-d172-4821-9bf1-fa1525984dff | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-09T04:51:46.000000 |
2026-02-09 04:51:56.351093 | orchestrator | | 74e295d4-8067-4f9e-81dd-b4be0dc952aa | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-09T04:51:47.000000 |
2026-02-09 04:51:56.351104 | orchestrator | | b4b04754-a1ca-42ab-9e6e-0a32bf986e09 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-09T04:51:50.000000 |
2026-02-09 04:51:56.351115 | orchestrator | | 2b36cefb-9663-493b-9c7b-1ceac3fcdfdc | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-09T04:51:51.000000 |
2026-02-09 04:51:56.351126 | orchestrator | | 02d9742d-6ef2-4824-898c-0e320953792f | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-09T04:51:51.000000 |
2026-02-09 04:51:56.351137 | orchestrator | | 4d44734e-1c07-46e6-b439-efbe2bf08d56 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-09T04:51:47.000000 |
2026-02-09 04:51:56.351147 | orchestrator | | 8eefb50f-00cf-4640-8765-291868a61d79 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-09T04:51:47.000000 |
2026-02-09 04:51:56.351158 | orchestrator | | bc33ab0f-aa1a-45ac-bf95-b1c35e1ea1ae | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-09T04:51:48.000000 |
2026-02-09 04:51:56.351169 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-09 04:51:56.675824 | orchestrator | + openstack hypervisor list
2026-02-09 04:51:59.734008 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-09 04:51:59.734182 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-02-09 04:51:59.734205 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-09 04:51:59.734221 | orchestrator | | 5edec9ee-0121-43b4-93f9-de48b00b4e5a | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-02-09 04:51:59.734235 | orchestrator | | c3a015d1-071f-4e21-837a-b0b017df445f | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-02-09 04:51:59.734249 | orchestrator | | 901bd17d-12ca-45a2-a6c2-4f2af432103f | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-02-09 04:51:59.734265 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-09 04:52:00.090681 | orchestrator |
2026-02-09 04:52:00.090777 | orchestrator | # Run OpenStack test play
2026-02-09
04:52:00.090792 | orchestrator | 2026-02-09 04:52:00.090809 | orchestrator | + echo 2026-02-09 04:52:00.090821 | orchestrator | + echo '# Run OpenStack test play' 2026-02-09 04:52:00.090833 | orchestrator | + echo 2026-02-09 04:52:00.090845 | orchestrator | + osism apply --environment openstack test 2026-02-09 04:52:02.273832 | orchestrator | 2026-02-09 04:52:02 | INFO  | Trying to run play test in environment openstack 2026-02-09 04:52:12.408699 | orchestrator | 2026-02-09 04:52:12 | INFO  | Task 1a2c6329-af1a-40f2-9415-599d140c8aab (test) was prepared for execution. 2026-02-09 04:52:12.408800 | orchestrator | 2026-02-09 04:52:12 | INFO  | It takes a moment until task 1a2c6329-af1a-40f2-9415-599d140c8aab (test) has been started and output is visible here. 2026-02-09 04:55:04.369030 | orchestrator | 2026-02-09 04:55:04.369109 | orchestrator | PLAY [Create test project] ***************************************************** 2026-02-09 04:55:04.369116 | orchestrator | 2026-02-09 04:55:04.369121 | orchestrator | TASK [Create test domain] ****************************************************** 2026-02-09 04:55:04.369126 | orchestrator | Monday 09 February 2026 04:52:17 +0000 (0:00:00.088) 0:00:00.088 ******* 2026-02-09 04:55:04.369130 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369135 | orchestrator | 2026-02-09 04:55:04.369139 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-02-09 04:55:04.369142 | orchestrator | Monday 09 February 2026 04:52:21 +0000 (0:00:03.854) 0:00:03.943 ******* 2026-02-09 04:55:04.369146 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369224 | orchestrator | 2026-02-09 04:55:04.369245 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-02-09 04:55:04.369250 | orchestrator | Monday 09 February 2026 04:52:25 +0000 (0:00:04.570) 0:00:08.513 ******* 2026-02-09 04:55:04.369254 | orchestrator | changed: [localhost] 2026-02-09 
04:55:04.369257 | orchestrator | 2026-02-09 04:55:04.369261 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-09 04:55:04.369265 | orchestrator | Monday 09 February 2026 04:52:32 +0000 (0:00:06.821) 0:00:15.335 ******* 2026-02-09 04:55:04.369269 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369273 | orchestrator | 2026-02-09 04:55:04.369277 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-09 04:55:04.369280 | orchestrator | Monday 09 February 2026 04:52:36 +0000 (0:00:04.253) 0:00:19.589 ******* 2026-02-09 04:55:04.369284 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369288 | orchestrator | 2026-02-09 04:55:04.369292 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-09 04:55:04.369296 | orchestrator | Monday 09 February 2026 04:52:41 +0000 (0:00:04.303) 0:00:23.892 ******* 2026-02-09 04:55:04.369300 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-09 04:55:04.369304 | orchestrator | changed: [localhost] => (item=member) 2026-02-09 04:55:04.369309 | orchestrator | changed: [localhost] => (item=creator) 2026-02-09 04:55:04.369313 | orchestrator | 2026-02-09 04:55:04.369316 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-09 04:55:04.369320 | orchestrator | Monday 09 February 2026 04:52:53 +0000 (0:00:12.360) 0:00:36.253 ******* 2026-02-09 04:55:04.369324 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369327 | orchestrator | 2026-02-09 04:55:04.369331 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-09 04:55:04.369335 | orchestrator | Monday 09 February 2026 04:52:58 +0000 (0:00:04.527) 0:00:40.781 ******* 2026-02-09 04:55:04.369338 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369342 | orchestrator | 2026-02-09 
04:55:04.369346 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-02-09 04:55:04.369349 | orchestrator | Monday 09 February 2026 04:53:03 +0000 (0:00:05.127) 0:00:45.909 ******* 2026-02-09 04:55:04.369353 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369357 | orchestrator | 2026-02-09 04:55:04.369360 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-09 04:55:04.369364 | orchestrator | Monday 09 February 2026 04:53:07 +0000 (0:00:04.439) 0:00:50.348 ******* 2026-02-09 04:55:04.369368 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369371 | orchestrator | 2026-02-09 04:55:04.369375 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-02-09 04:55:04.369379 | orchestrator | Monday 09 February 2026 04:53:11 +0000 (0:00:04.256) 0:00:54.605 ******* 2026-02-09 04:55:04.369383 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369386 | orchestrator | 2026-02-09 04:55:04.369390 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-09 04:55:04.369394 | orchestrator | Monday 09 February 2026 04:53:16 +0000 (0:00:04.437) 0:00:59.043 ******* 2026-02-09 04:55:04.369397 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369401 | orchestrator | 2026-02-09 04:55:04.369405 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-09 04:55:04.369408 | orchestrator | Monday 09 February 2026 04:53:20 +0000 (0:00:04.459) 0:01:03.502 ******* 2026-02-09 04:55:04.369412 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369415 | orchestrator | 2026-02-09 04:55:04.369420 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-09 04:55:04.369423 | orchestrator | Monday 09 February 2026 04:53:25 +0000 (0:00:05.102) 0:01:08.604 ******* 2026-02-09 
04:55:04.369427 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369431 | orchestrator | 2026-02-09 04:55:04.369434 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-09 04:55:04.369438 | orchestrator | Monday 09 February 2026 04:53:31 +0000 (0:00:05.299) 0:01:13.904 ******* 2026-02-09 04:55:04.369446 | orchestrator | changed: [localhost] 2026-02-09 04:55:04.369449 | orchestrator | 2026-02-09 04:55:04.369453 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-09 04:55:04.369457 | orchestrator | 2026-02-09 04:55:04.369461 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-09 04:55:04.369464 | orchestrator | Monday 09 February 2026 04:53:41 +0000 (0:00:10.389) 0:01:24.294 ******* 2026-02-09 04:55:04.369468 | orchestrator | ok: [localhost] 2026-02-09 04:55:04.369472 | orchestrator | 2026-02-09 04:55:04.369476 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-09 04:55:04.369480 | orchestrator | Monday 09 February 2026 04:53:45 +0000 (0:00:03.827) 0:01:28.121 ******* 2026-02-09 04:55:04.369483 | orchestrator | skipping: [localhost] 2026-02-09 04:55:04.369487 | orchestrator | 2026-02-09 04:55:04.369491 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-09 04:55:04.369495 | orchestrator | Monday 09 February 2026 04:53:45 +0000 (0:00:00.046) 0:01:28.167 ******* 2026-02-09 04:55:04.369498 | orchestrator | skipping: [localhost] 2026-02-09 04:55:04.369502 | orchestrator | 2026-02-09 04:55:04.369506 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-09 04:55:04.369510 | orchestrator | Monday 09 February 2026 04:53:45 +0000 (0:00:00.067) 0:01:28.235 ******* 2026-02-09 04:55:04.369524 | orchestrator | skipping: [localhost] => (item=test-4)  
2026-02-09 04:55:04.369528 | orchestrator | skipping: [localhost] => (item=test-3)  2026-02-09 04:55:04.369542 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-09 04:55:04.369546 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-09 04:55:04.369550 | orchestrator | skipping: [localhost] => (item=test)  2026-02-09 04:55:04.369554 | orchestrator | skipping: [localhost] 2026-02-09 04:55:04.369558 | orchestrator | 2026-02-09 04:55:04.369561 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-09 04:55:04.369565 | orchestrator | Monday 09 February 2026 04:53:45 +0000 (0:00:00.185) 0:01:28.421 ******* 2026-02-09 04:55:04.369569 | orchestrator | skipping: [localhost] 2026-02-09 04:55:04.369573 | orchestrator | 2026-02-09 04:55:04.369576 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-09 04:55:04.369580 | orchestrator | Monday 09 February 2026 04:53:45 +0000 (0:00:00.167) 0:01:28.588 ******* 2026-02-09 04:55:04.369584 | orchestrator | changed: [localhost] => (item=test) 2026-02-09 04:55:04.369588 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-09 04:55:04.369591 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-09 04:55:04.369595 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-09 04:55:04.369600 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-09 04:55:04.369604 | orchestrator | 2026-02-09 04:55:04.369609 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-09 04:55:04.369613 | orchestrator | Monday 09 February 2026 04:53:50 +0000 (0:00:04.895) 0:01:33.484 ******* 2026-02-09 04:55:04.369617 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-09 04:55:04.369625 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 
2026-02-09 04:55:04.369631 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-09 04:55:04.369636 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-02-09 04:55:04.369642 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (56 retries left). 2026-02-09 04:55:04.369650 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j664527795680.3684', 'results_file': '/ansible/.ansible_async/j664527795680.3684', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:04.369659 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j352320037703.3709', 'results_file': '/ansible/.ansible_async/j352320037703.3709', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:04.369670 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j953878466124.3734', 'results_file': '/ansible/.ansible_async/j953878466124.3734', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:04.369676 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j716521559294.3759', 'results_file': '/ansible/.ansible_async/j716521559294.3759', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:04.369683 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j366128808215.3784', 'results_file': '/ansible/.ansible_async/j366128808215.3784', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:04.369693 | orchestrator | 2026-02-09 04:55:04.369700 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-09 
04:55:04.369707 | orchestrator | Monday 09 February 2026 04:54:48 +0000 (0:00:58.143) 0:02:31.627 ******* 2026-02-09 04:55:04.369713 | orchestrator | changed: [localhost] => (item=test) 2026-02-09 04:55:04.369719 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-09 04:55:04.369725 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-09 04:55:04.369731 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-09 04:55:04.369738 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-09 04:55:04.369746 | orchestrator | 2026-02-09 04:55:04.369752 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-09 04:55:04.369760 | orchestrator | Monday 09 February 2026 04:54:54 +0000 (0:00:05.399) 0:02:37.027 ******* 2026-02-09 04:55:04.369766 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-02-09 04:55:04.369773 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j9852760180.3895', 'results_file': '/ansible/.ansible_async/j9852760180.3895', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:04.369781 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j462065946910.3920', 'results_file': '/ansible/.ansible_async/j462065946910.3920', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:04.369787 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j377455134001.3945', 'results_file': '/ansible/.ansible_async/j377455134001.3945', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:04.369807 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j280152005195.3970', 'results_file': '/ansible/.ansible_async/j280152005195.3970', 'changed': 
True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:45.450675 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j525275130274.3995', 'results_file': '/ansible/.ansible_async/j525275130274.3995', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:45.450815 | orchestrator | 2026-02-09 04:55:45.450831 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-09 04:55:45.450844 | orchestrator | Monday 09 February 2026 04:55:04 +0000 (0:00:09.947) 0:02:46.975 ******* 2026-02-09 04:55:45.450855 | orchestrator | changed: [localhost] => (item=test) 2026-02-09 04:55:45.450867 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-09 04:55:45.450877 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-09 04:55:45.450887 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-09 04:55:45.450896 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-09 04:55:45.450906 | orchestrator | 2026-02-09 04:55:45.450942 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-09 04:55:45.450953 | orchestrator | Monday 09 February 2026 04:55:09 +0000 (0:00:04.816) 0:02:51.792 ******* 2026-02-09 04:55:45.450962 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-02-09 04:55:45.450974 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j709732098513.4064', 'results_file': '/ansible/.ansible_async/j709732098513.4064', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:45.450985 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j553871228708.4089', 'results_file': '/ansible/.ansible_async/j553871228708.4089', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:45.450995 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j363462892289.4115', 'results_file': '/ansible/.ansible_async/j363462892289.4115', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:45.451005 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j416169554969.4141', 'results_file': '/ansible/.ansible_async/j416169554969.4141', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:45.451015 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j164736490335.4167', 'results_file': '/ansible/.ansible_async/j164736490335.4167', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-09 04:55:45.451024 | orchestrator | 2026-02-09 04:55:45.451034 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-09 04:55:45.451043 | orchestrator | Monday 09 February 2026 04:55:20 +0000 (0:00:10.875) 0:03:02.668 ******* 2026-02-09 04:55:45.451053 | orchestrator | changed: [localhost] 2026-02-09 04:55:45.451063 | orchestrator | 2026-02-09 04:55:45.451072 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-09 04:55:45.451082 | orchestrator | Monday 09 February 
2026 04:55:26 +0000 (0:00:06.315) 0:03:08.983 ******* 2026-02-09 04:55:45.451091 | orchestrator | changed: [localhost] 2026-02-09 04:55:45.451101 | orchestrator | 2026-02-09 04:55:45.451110 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-09 04:55:45.451120 | orchestrator | Monday 09 February 2026 04:55:39 +0000 (0:00:13.401) 0:03:22.385 ******* 2026-02-09 04:55:45.451130 | orchestrator | ok: [localhost] 2026-02-09 04:55:45.451140 | orchestrator | 2026-02-09 04:55:45.451150 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-09 04:55:45.451159 | orchestrator | Monday 09 February 2026 04:55:44 +0000 (0:00:05.106) 0:03:27.491 ******* 2026-02-09 04:55:45.451169 | orchestrator | ok: [localhost] => { 2026-02-09 04:55:45.451179 | orchestrator |  "msg": "192.168.112.162" 2026-02-09 04:55:45.451190 | orchestrator | } 2026-02-09 04:55:45.451238 | orchestrator | 2026-02-09 04:55:45.451249 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 04:55:45.451262 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 04:55:45.451275 | orchestrator | 2026-02-09 04:55:45.451286 | orchestrator | 2026-02-09 04:55:45.451297 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 04:55:45.451308 | orchestrator | Monday 09 February 2026 04:55:44 +0000 (0:00:00.070) 0:03:27.561 ******* 2026-02-09 04:55:45.451319 | orchestrator | =============================================================================== 2026-02-09 04:55:45.451330 | orchestrator | Wait for instance creation to complete --------------------------------- 58.14s 2026-02-09 04:55:45.451341 | orchestrator | Attach test volume ----------------------------------------------------- 13.40s 2026-02-09 04:55:45.451352 | orchestrator | Add member roles to 
user test ------------------------------------------ 12.36s 2026-02-09 04:55:45.451389 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.88s 2026-02-09 04:55:45.451402 | orchestrator | Create test router ----------------------------------------------------- 10.39s 2026-02-09 04:55:45.451413 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.95s 2026-02-09 04:55:45.451424 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.82s 2026-02-09 04:55:45.451453 | orchestrator | Create test volume ------------------------------------------------------ 6.32s 2026-02-09 04:55:45.451465 | orchestrator | Add metadata to instances ----------------------------------------------- 5.40s 2026-02-09 04:55:45.451476 | orchestrator | Create test subnet ------------------------------------------------------ 5.30s 2026-02-09 04:55:45.451487 | orchestrator | Create ssh security group ----------------------------------------------- 5.13s 2026-02-09 04:55:45.451498 | orchestrator | Create floating ip address ---------------------------------------------- 5.11s 2026-02-09 04:55:45.451510 | orchestrator | Create test network ----------------------------------------------------- 5.10s 2026-02-09 04:55:45.451520 | orchestrator | Create test instances --------------------------------------------------- 4.90s 2026-02-09 04:55:45.451531 | orchestrator | Add tag to instances ---------------------------------------------------- 4.82s 2026-02-09 04:55:45.451541 | orchestrator | Create test-admin user -------------------------------------------------- 4.57s 2026-02-09 04:55:45.451551 | orchestrator | Create test server group ------------------------------------------------ 4.53s 2026-02-09 04:55:45.451560 | orchestrator | Create test keypair ----------------------------------------------------- 4.46s 2026-02-09 04:55:45.451570 | orchestrator | Add rule to ssh security group 
------------------------------------------ 4.44s 2026-02-09 04:55:45.451579 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.44s 2026-02-09 04:55:45.840148 | orchestrator | + server_list 2026-02-09 04:55:45.840282 | orchestrator | + openstack --os-cloud test server list 2026-02-09 04:55:49.565581 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-09 04:55:49.565698 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-09 04:55:49.565709 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-09 04:55:49.565717 | orchestrator | | de330e1e-53d3-4d1b-bf6d-cfaccc91b83c | test-3 | ACTIVE | test=192.168.112.198, 192.168.200.248 | N/A (booted from volume) | SCS-1L-1 | 2026-02-09 04:55:49.565724 | orchestrator | | 5b685b6b-2bf4-4d8c-bf62-1a5c6bdc00ec | test-2 | ACTIVE | test=192.168.112.173, 192.168.200.213 | N/A (booted from volume) | SCS-1L-1 | 2026-02-09 04:55:49.565731 | orchestrator | | c50dedd3-a7dd-4cba-b957-47fe10f077e5 | test-4 | ACTIVE | test=192.168.112.166, 192.168.200.13 | N/A (booted from volume) | SCS-1L-1 | 2026-02-09 04:55:49.565739 | orchestrator | | 52af2773-2f53-4c7b-8501-574dfec7b1ee | test-1 | ACTIVE | test=192.168.112.184, 192.168.200.126 | N/A (booted from volume) | SCS-1L-1 | 2026-02-09 04:55:49.565746 | orchestrator | | 65cde3e8-7445-47ba-8139-a4c4dc3f1202 | test | ACTIVE | test=192.168.112.162, 192.168.200.57 | N/A (booted from volume) | SCS-1L-1 | 2026-02-09 04:55:49.565753 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-09 04:55:49.865554 | orchestrator | + openstack --os-cloud test server show test 2026-02-09 04:55:53.597805 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-09 04:55:53.597956 | orchestrator | | Field | Value | 2026-02-09 04:55:53.597996 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-09 04:55:53.598074 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-09 04:55:53.598091 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-09 04:55:53.598102 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-09 04:55:53.598114 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-02-09 04:55:53.598125 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-09 04:55:53.598136 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-09 04:55:53.598169 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-09 04:55:53.598181 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-09 04:55:53.598238 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-09 04:55:53.598286 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-09 04:55:53.598306 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-09 04:55:53.598320 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-09 04:55:53.598333 | orchestrator | | OS-EXT-STS:power_state | 
Running |
orchestrator | | OS-EXT-STS:task_state | None |
orchestrator | | OS-EXT-STS:vm_state | active |
orchestrator | | OS-SRV-USG:launched_at | 2026-02-09T04:54:24.000000 |
orchestrator | | OS-SRV-USG:terminated_at | None |
orchestrator | | accessIPv4 | |
orchestrator | | accessIPv6 | |
orchestrator | | addresses | test=192.168.112.162, 192.168.200.57 |
orchestrator | | config_drive | |
orchestrator | | created | 2026-02-09T04:53:55Z |
orchestrator | | description | None |
orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
orchestrator | | hostId | 2919c59ed8ea0a7f43cdf4b25ca4bb520d66def0e6499c33a41b66e3 |
orchestrator | | host_status | None |
orchestrator | | id | 65cde3e8-7445-47ba-8139-a4c4dc3f1202 |
orchestrator | | image | N/A (booted from volume) |
orchestrator | | key_name | test |
orchestrator | | locked | False |
orchestrator | | locked_reason | None |
orchestrator | | name | test |
orchestrator | | pinned_availability_zone | None |
orchestrator | | progress | 0 |
orchestrator | | project_id | c4517bac21914a46bd4ada79fdef4cd5 |
orchestrator | | properties | hostname='test' |
orchestrator | | security_groups | name='icmp' |
orchestrator | | | name='ssh' |
orchestrator | | server_groups | None |
orchestrator | | status | ACTIVE |
orchestrator | | tags | test |
orchestrator | | trusted_image_certificates | None |
orchestrator | | updated | 2026-02-09T04:54:55Z |
orchestrator | | user_id | 03249ec1dd6e42168653d0f815e14472 |
orchestrator | | volumes_attached | delete_on_termination='True', id='46d3d9a6-0d16-4b70-a45e-90b0ee906ed2' |
orchestrator | | | delete_on_termination='False', id='4ab0aca0-67ea-4720-a5ab-72d3d9791f90' |
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | + openstack --os-cloud test server show test-1
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | | Field | Value |
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | | OS-DCF:diskConfig | MANUAL |
orchestrator | | OS-EXT-AZ:availability_zone | nova |
orchestrator | | OS-EXT-SRV-ATTR:host | None |
orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
orchestrator | | OS-EXT-STS:power_state | Running |
orchestrator | | OS-EXT-STS:task_state | None |
orchestrator | | OS-EXT-STS:vm_state | active |
orchestrator | | OS-SRV-USG:launched_at | 2026-02-09T04:54:23.000000 |
orchestrator | | OS-SRV-USG:terminated_at | None |
orchestrator | | accessIPv4 | |
orchestrator | | accessIPv6 | |
orchestrator | | addresses | test=192.168.112.184, 192.168.200.126 |
orchestrator | | config_drive | |
orchestrator | | created | 2026-02-09T04:53:55Z |
orchestrator | | description | None |
orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
orchestrator | | hostId | 2919c59ed8ea0a7f43cdf4b25ca4bb520d66def0e6499c33a41b66e3 |
orchestrator | | host_status | None |
orchestrator | | id | 52af2773-2f53-4c7b-8501-574dfec7b1ee |
orchestrator | | image | N/A (booted from volume) |
orchestrator | | key_name | test |
orchestrator | | locked | False |
orchestrator | | locked_reason | None |
orchestrator | | name | test-1 |
orchestrator | | pinned_availability_zone | None |
orchestrator | | progress | 0 |
orchestrator | | project_id | c4517bac21914a46bd4ada79fdef4cd5 |
orchestrator | | properties | hostname='test-1' |
orchestrator | | security_groups | name='icmp' |
orchestrator | | | name='ssh' |
orchestrator | | server_groups | None |
orchestrator | | status | ACTIVE |
orchestrator | | tags | test |
orchestrator | | trusted_image_certificates | None |
orchestrator | | updated | 2026-02-09T04:54:56Z |
orchestrator | | user_id | 03249ec1dd6e42168653d0f815e14472 |
orchestrator | | volumes_attached | delete_on_termination='True', id='dd2b3530-bf29-4c0f-95b0-11f4c009e882' |
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | + openstack --os-cloud test server show test-2
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | | Field | Value |
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | | OS-DCF:diskConfig | MANUAL |
orchestrator | | OS-EXT-AZ:availability_zone | nova |
orchestrator | | OS-EXT-SRV-ATTR:host | None |
orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
orchestrator | | OS-EXT-STS:power_state | Running |
orchestrator | | OS-EXT-STS:task_state | None |
orchestrator | | OS-EXT-STS:vm_state | active |
orchestrator | | OS-SRV-USG:launched_at | 2026-02-09T04:54:22.000000 |
orchestrator | | OS-SRV-USG:terminated_at | None |
orchestrator | | accessIPv4 | |
orchestrator | | accessIPv6 | |
orchestrator | | addresses | test=192.168.112.173, 192.168.200.213 |
orchestrator | | config_drive | |
orchestrator | | created | 2026-02-09T04:53:58Z |
orchestrator | | description | None |
orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
orchestrator | | hostId | 86f43cb0a8cdcb5bd6531c68b850017529b318865f523aa1970c3fc2 |
orchestrator | | host_status | None |
orchestrator | | id | 5b685b6b-2bf4-4d8c-bf62-1a5c6bdc00ec |
orchestrator | | image | N/A (booted from volume) |
orchestrator | | key_name | test |
orchestrator | | locked | False |
orchestrator | | locked_reason | None |
orchestrator | | name | test-2 |
orchestrator | | pinned_availability_zone | None |
orchestrator | | progress | 0 |
orchestrator | | project_id | c4517bac21914a46bd4ada79fdef4cd5 |
orchestrator | | properties | hostname='test-2' |
orchestrator | | security_groups | name='icmp' |
orchestrator | | | name='ssh' |
orchestrator | | server_groups | None |
orchestrator | | status | ACTIVE |
orchestrator | | tags | test |
orchestrator | | trusted_image_certificates | None |
orchestrator | | updated | 2026-02-09T04:54:56Z |
orchestrator | | user_id | 03249ec1dd6e42168653d0f815e14472 |
orchestrator | | volumes_attached | delete_on_termination='True', id='f32a3294-9fb6-4577-8712-1c15a81fa8e0' |
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | + openstack --os-cloud test server show test-3
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | | Field | Value |
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | | OS-DCF:diskConfig | MANUAL |
orchestrator | | OS-EXT-AZ:availability_zone | nova |
orchestrator | | OS-EXT-SRV-ATTR:host | None |
orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
orchestrator | | OS-EXT-STS:power_state | Running |
orchestrator | | OS-EXT-STS:task_state | None |
orchestrator | | OS-EXT-STS:vm_state | active |
orchestrator | | OS-SRV-USG:launched_at | 2026-02-09T04:54:22.000000 |
orchestrator | | OS-SRV-USG:terminated_at | None |
orchestrator | | accessIPv4 | |
orchestrator | | accessIPv6 | |
orchestrator | | addresses | test=192.168.112.198, 192.168.200.248 |
orchestrator | | config_drive | |
orchestrator | | created | 2026-02-09T04:53:59Z |
orchestrator | | description | None |
orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
orchestrator | | hostId | 86f43cb0a8cdcb5bd6531c68b850017529b318865f523aa1970c3fc2 |
orchestrator | | host_status | None |
orchestrator | | id | de330e1e-53d3-4d1b-bf6d-cfaccc91b83c |
orchestrator | | image | N/A (booted from volume) |
orchestrator | | key_name | test |
orchestrator | | locked | False |
orchestrator | | locked_reason | None |
orchestrator | | name | test-3 |
orchestrator | | pinned_availability_zone | None |
orchestrator | | progress | 0 |
orchestrator | | project_id | c4517bac21914a46bd4ada79fdef4cd5 |
orchestrator | | properties | hostname='test-3' |
orchestrator | | security_groups | name='icmp' |
orchestrator | | | name='ssh' |
orchestrator | | server_groups | None |
orchestrator | | status | ACTIVE |
orchestrator | | tags | test |
orchestrator | | trusted_image_certificates | None |
orchestrator | | updated | 2026-02-09T04:54:57Z |
orchestrator | | user_id | 03249ec1dd6e42168653d0f815e14472 |
orchestrator | | volumes_attached | delete_on_termination='True', id='8370b843-d03e-4523-b9df-c65ef20b4e35' |
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | + openstack --os-cloud test server show test-4
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | | Field | Value |
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | | OS-DCF:diskConfig | MANUAL |
orchestrator | | OS-EXT-AZ:availability_zone | nova |
orchestrator | | OS-EXT-SRV-ATTR:host | None |
orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
orchestrator | | OS-EXT-STS:power_state | Running |
orchestrator | | OS-EXT-STS:task_state | None |
orchestrator | | OS-EXT-STS:vm_state | active |
orchestrator | | OS-SRV-USG:launched_at | 2026-02-09T04:54:25.000000 |
orchestrator | | OS-SRV-USG:terminated_at | None |
orchestrator | | accessIPv4 | |
orchestrator | | accessIPv6 | |
orchestrator | | addresses | test=192.168.112.166, 192.168.200.13 |
orchestrator | | config_drive | |
orchestrator | | created | 2026-02-09T04:53:58Z |
orchestrator | | description | None |
orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
orchestrator | | hostId | 86f43cb0a8cdcb5bd6531c68b850017529b318865f523aa1970c3fc2 |
orchestrator | | host_status | None |
orchestrator | | id | c50dedd3-a7dd-4cba-b957-47fe10f077e5 |
orchestrator | | image | N/A (booted from volume) |
orchestrator | | key_name | test |
orchestrator | | locked | False |
orchestrator | | locked_reason | None |
orchestrator | | name | test-4 |
orchestrator | | pinned_availability_zone | None |
orchestrator | | progress | 0 |
orchestrator | | project_id | c4517bac21914a46bd4ada79fdef4cd5 |
orchestrator | | properties | hostname='test-4' |
orchestrator | | security_groups | name='icmp' |
orchestrator | | | name='ssh' |
orchestrator | | server_groups | None |
orchestrator | | status | ACTIVE |
orchestrator | | tags | test |
orchestrator | | trusted_image_certificates | None |
orchestrator | | updated | 2026-02-09T04:54:58Z |
orchestrator | | user_id | 03249ec1dd6e42168653d0f815e14472 |
orchestrator | | volumes_attached | delete_on_termination='True', id='05d7a335-aaaa-44c7-a47f-cfae66db0e38' |
orchestrator | +-------------------------------------+--------------------------------------+
orchestrator | + server_ping
orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
orchestrator | ++ tr -d '\r'
orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
orchestrator | + ping -c3 192.168.112.198
orchestrator | PING 192.168.112.198 (192.168.112.198) 56(84) bytes of data.
orchestrator | 64 bytes from 192.168.112.198: icmp_seq=1 ttl=63 time=6.58 ms
orchestrator | 64 bytes from 192.168.112.198: icmp_seq=2 ttl=63 time=2.27 ms
orchestrator | 64 bytes from 192.168.112.198: icmp_seq=3 ttl=63 time=1.52 ms
orchestrator |
orchestrator | --- 192.168.112.198 ping statistics ---
orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
orchestrator | rtt min/avg/max/mdev = 1.523/3.458/6.583/2.230 ms
orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
orchestrator | + ping -c3 192.168.112.184
orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data.
orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=7.24 ms
orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.67 ms
orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=1.76 ms
orchestrator |
orchestrator | --- 192.168.112.184 ping statistics ---
orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
orchestrator | rtt min/avg/max/mdev = 1.757/3.887/7.239/2.399 ms
orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
orchestrator | + ping -c3 192.168.112.162
orchestrator | PING 192.168.112.162 (192.168.112.162) 56(84) bytes of data.
orchestrator | 64 bytes from 192.168.112.162: icmp_seq=1 ttl=63 time=5.47 ms
orchestrator | 64 bytes from 192.168.112.162: icmp_seq=2 ttl=63 time=2.38 ms
orchestrator | 64 bytes from 192.168.112.162: icmp_seq=3 ttl=63 time=1.71 ms
orchestrator |
orchestrator | --- 192.168.112.162 ping statistics ---
orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
orchestrator | rtt min/avg/max/mdev = 1.710/3.186/5.465/1.634 ms
orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
orchestrator | + ping -c3 192.168.112.173
orchestrator | PING 192.168.112.173 (192.168.112.173) 56(84) bytes of data.
orchestrator | 64 bytes from 192.168.112.173: icmp_seq=1 ttl=63 time=6.75 ms
orchestrator | 64 bytes from 192.168.112.173: icmp_seq=2 ttl=63 time=2.64 ms
orchestrator | 64 bytes from 192.168.112.173: icmp_seq=3 ttl=63 time=2.57 ms
orchestrator |
orchestrator | --- 192.168.112.173 ping statistics ---
orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
orchestrator | rtt min/avg/max/mdev = 2.566/3.986/6.752/1.955 ms
orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
orchestrator | + ping -c3 192.168.112.166
orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data.
orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=8.51 ms
orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=2.45 ms
orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=2.03 ms
orchestrator |
orchestrator | --- 192.168.112.166 ping statistics ---
orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
orchestrator | rtt min/avg/max/mdev = 2.029/4.326/8.506/2.960 ms
orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
orchestrator | ok: Runtime: 0:08:26.156320

TASK [Run tempest]
orchestrator | skipping: Conditional result was False

TASK [Check prometheus alert status]
orchestrator | skipping: Conditional result was False

PLAY [Upgrade testbed]

TASK [Print next ceph version]
orchestrator | ok

TASK [Print next openstack version]
orchestrator | ok

TASK [Print next manager version]
orchestrator | ok

TASK [Set cloud fact (Zuul deployment)]
orchestrator | ok

TASK [Set cloud fact (local deployment)]
orchestrator | skipping: Conditional result was False

TASK [Fetch manager address]
orchestrator | ok

TASK [Set manager_host address]
orchestrator | ok

TASK [Run upgrade]
orchestrator | + set -e
orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
orchestrator | + MANAGER_VERSION=10.0.0-rc.1
orchestrator | + CEPH_VERSION=reef
orchestrator | + OPENSTACK_VERSION=2024.2
orchestrator | + KOLLA_NAMESPACE=kolla/release
orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release'
orchestrator | + set -e
orchestrator | + source /opt/configuration/scripts/include.sh
orchestrator | ++ export INTERACTIVE=false
orchestrator | ++ INTERACTIVE=false
orchestrator | ++ export OSISM_APPLY_RETRY=1
orchestrator | ++ OSISM_APPLY_RETRY=1
orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible
orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0
orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
orchestrator |
orchestrator | # UPGRADE MANAGER
orchestrator |
orchestrator | + OLD_OPENSTACK_VERSION=2024.2
orchestrator | + echo
orchestrator | + echo '# UPGRADE MANAGER'
orchestrator | + echo
orchestrator | + export MANAGER_VERSION=10.0.0-rc.1
orchestrator | + MANAGER_VERSION=10.0.0-rc.1
orchestrator | + CEPH_VERSION=reef
orchestrator | + OPENSTACK_VERSION=2024.2
orchestrator | + KOLLA_NAMESPACE=kolla/release
orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1
orchestrator | + set -e
orchestrator | + VERSION=10.0.0-rc.1
orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml
orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]]
orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
orchestrator | /opt/configuration ~
orchestrator | + set -e
orchestrator | + pushd /opt/configuration
orchestrator | + [[ -e /opt/venv/bin/activate ]]
orchestrator | + source /opt/venv/bin/activate
orchestrator | ++ deactivate nondestructive
orchestrator | ++ '[' -n '' ']'
orchestrator | ++ '[' -n '' ']'
orchestrator | ++ hash -r
orchestrator | ++ '[' -n '' ']'
orchestrator | ++ unset VIRTUAL_ENV
orchestrator | ++ unset VIRTUAL_ENV_PROMPT
orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
orchestrator | ++ '[' linux-gnu = cygwin ']'
orchestrator | ++ '[' linux-gnu = msys ']'
orchestrator | ++ export VIRTUAL_ENV=/opt/venv
orchestrator | ++ VIRTUAL_ENV=/opt/venv
orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
orchestrator | ++ export PATH
orchestrator | ++ '[' -n '' ']'
orchestrator | ++ '[' -z '' ']'
orchestrator | ++ _OLD_VIRTUAL_PS1=
orchestrator | ++ PS1='(venv) '
orchestrator | ++ export PS1
orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
orchestrator | ++ export VIRTUAL_ENV_PROMPT
orchestrator | ++ hash -r
orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
orchestrator | ++ which gilt
orchestrator | + GILT=/opt/venv/bin/gilt
orchestrator | + /opt/venv/bin/gilt overlay
orchestrator | osism.cfg-generics:
orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
orchestrator | ~
orchestrator | + [[ -e /opt/venv/bin/activate ]]
orchestrator | + deactivate
orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
orchestrator | + export PATH
orchestrator | + unset _OLD_VIRTUAL_PATH
orchestrator | + '[' -n '' ']'
orchestrator | + hash -r
orchestrator | + '[' -n '' ']'
orchestrator | + unset VIRTUAL_ENV
orchestrator | + unset VIRTUAL_ENV_PROMPT
orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-09 04:56:28.155381 | orchestrator | + unset -f deactivate 2026-02-09 04:56:28.155385 | orchestrator | + popd 2026-02-09 04:56:28.156568 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-09 04:56:28.156601 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-09 04:56:28.163872 | orchestrator | + set -e 2026-02-09 04:56:28.163886 | orchestrator | + NAMESPACE=kolla/release 2026-02-09 04:56:28.163892 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-09 04:56:28.169698 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-09 04:56:28.175355 | orchestrator | /opt/configuration ~ 2026-02-09 04:56:28.175371 | orchestrator | + set -e 2026-02-09 04:56:28.175376 | orchestrator | + pushd /opt/configuration 2026-02-09 04:56:28.175381 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-09 04:56:28.175386 | orchestrator | + source /opt/venv/bin/activate 2026-02-09 04:56:28.175390 | orchestrator | ++ deactivate nondestructive 2026-02-09 04:56:28.175394 | orchestrator | ++ '[' -n '' ']' 2026-02-09 04:56:28.175398 | orchestrator | ++ '[' -n '' ']' 2026-02-09 04:56:28.175402 | orchestrator | ++ hash -r 2026-02-09 04:56:28.175406 | orchestrator | ++ '[' -n '' ']' 2026-02-09 04:56:28.175410 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-09 04:56:28.175414 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-09 04:56:28.175418 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-09 04:56:28.175423 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-09 04:56:28.175427 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-09 04:56:28.175431 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-09 04:56:28.175440 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-09 04:56:28.175444 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-09 04:56:28.175449 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-09 04:56:28.175453 | orchestrator | ++ export PATH 2026-02-09 04:56:28.175459 | orchestrator | ++ '[' -n '' ']' 2026-02-09 04:56:28.175463 | orchestrator | ++ '[' -z '' ']' 2026-02-09 04:56:28.175467 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-09 04:56:28.175470 | orchestrator | ++ PS1='(venv) ' 2026-02-09 04:56:28.175474 | orchestrator | ++ export PS1 2026-02-09 04:56:28.175478 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-09 04:56:28.175482 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-09 04:56:28.175486 | orchestrator | ++ hash -r 2026-02-09 04:56:28.175490 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-09 04:56:28.733314 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-09 04:56:28.734695 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-09 04:56:28.736976 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-09 04:56:28.738929 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-09 04:56:28.740527 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-09 04:56:28.753030 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-09 04:56:28.756289 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-09 04:56:28.756351 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-09 04:56:28.758110 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-09 04:56:28.803100 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-09 04:56:28.805299 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-09 04:56:28.807487 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-09 04:56:28.809118 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-09 04:56:28.813577 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-09 04:56:29.078805 | orchestrator | ++ which gilt 2026-02-09 04:56:29.080370 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-09 04:56:29.080383 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-09 04:56:29.268345 | orchestrator | osism.cfg-generics: 2026-02-09 04:56:29.345861 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-09 04:56:29.346932 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-09 04:56:29.346953 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-09 04:56:29.346962 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-09 04:56:29.899225 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-09 04:56:29.910268 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-09 04:56:30.303124 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-09 04:56:30.382634 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-09 04:56:30.382738 | orchestrator | + deactivate 2026-02-09 04:56:30.382779 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-09 04:56:30.382793 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-09 04:56:30.382804 | orchestrator | + export PATH 2026-02-09 04:56:30.382816 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-09 04:56:30.382827 | orchestrator | + '[' -n '' ']' 2026-02-09 04:56:30.382837 | orchestrator | + hash -r 2026-02-09 04:56:30.382848 | orchestrator | + '[' -n '' ']' 2026-02-09 04:56:30.382859 | orchestrator | + unset VIRTUAL_ENV 2026-02-09 04:56:30.382871 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-09 04:56:30.382883 | orchestrator | ~ 2026-02-09 04:56:30.382894 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-09 04:56:30.382906 | orchestrator | + unset -f deactivate 2026-02-09 04:56:30.382917 | orchestrator | + popd 2026-02-09 04:56:30.384665 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-09 04:56:30.433457 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-09 04:56:30.433934 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-09 04:56:30.516827 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-09 04:56:30.516931 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-09 04:56:30.523305 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-09 04:56:30.531520 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-09 04:56:30.582328 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-09 04:56:30.583744 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-09 04:56:30.700591 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-09 04:56:30.700695 | orchestrator | ++ echo true 2026-02-09 04:56:30.701651 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-09 04:56:30.703603 | orchestrator | +++ semver 2024.2 2024.2 2026-02-09 04:56:30.804472 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-09 04:56:30.805067 | orchestrator | +++ semver 2024.2 2025.1 2026-02-09 04:56:30.874612 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-09 04:56:30.874713 | orchestrator | ++ echo false 2026-02-09 04:56:30.875008 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-09 04:56:30.875053 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-09 04:56:30.875282 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-09 04:56:30.875349 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-09 04:56:30.875367 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-02-09 04:56:30.881672 | orchestrator | + 
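The `semver`-based gating above decides whether the upgrade crosses the manager 10.0.0 and OpenStack 2025.1 boundaries. A simplified sketch of that logic; `semver_cmp` is a hypothetical stand-in for the `semver` helper seen in the log (it compares only the dotted numeric part and ignores pre-release suffixes, whereas the real helper compares against `10.0.0-0` to rank pre-releases correctly, and the real gating also consults the configuration-generics version):

```shell
#!/bin/sh
# semver_cmp A B -> prints -1, 0, or 1. Pre-release suffixes (e.g. -rc.1)
# are stripped, so 10.0.0-rc.1 compares equal to 10.0.0 here; this is a
# deliberate simplification of the helper traced in the log.
semver_cmp() {
    a=${1%%-*}; b=${2%%-*}
    if [ "$a" = "$b" ]; then printf '%s\n' 0; return; fi
    first=$(printf '%s\n%s\n' "$a" "$b" | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)
    if [ "$first" = "$a" ]; then printf '%s\n' -1; else printf '%s\n' 1; fi
}

MANAGER_VERSION=10.0.0-rc.1
OPENSTACK_VERSION=2024.2

# Does this upgrade reach or cross the 10.0.0 manager boundary?
if [ "$(semver_cmp "$MANAGER_VERSION" 10.0.0)" -ge 0 ]; then
    MANAGER_UPGRADE_CROSSES_10=true
else
    MANAGER_UPGRADE_CROSSES_10=false
fi

# Does it reach or cross the 2025.1 OpenStack boundary?
if [ "$(semver_cmp "$OPENSTACK_VERSION" 2025.1)" -ge 0 ]; then
    OPENSTACK_UPGRADE_CROSSES_2025=true
else
    OPENSTACK_UPGRADE_CROSSES_2025=false
fi
echo "$MANAGER_UPGRADE_CROSSES_10 $OPENSTACK_UPGRADE_CROSSES_2025"
```

With the versions from this run the sketch yields `true false`, matching the `MANAGER_UPGRADE_CROSSES_10=true` / `OPENSTACK_UPGRADE_CROSSES_2025=false` assignments in the trace, which in turn trigger the RabbitMQ vhost and `RABBITMQ3TO4` migration settings that follow.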
echo 'export RABBITMQ3TO4=true' 2026-02-09 04:56:30.882308 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-09 04:56:30.902313 | orchestrator | export RABBITMQ3TO4=true 2026-02-09 04:56:30.905768 | orchestrator | + osism update manager 2026-02-09 04:56:37.269724 | orchestrator | Collecting uv 2026-02-09 04:56:37.363457 | orchestrator | Downloading uv-0.10.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-09 04:56:37.379968 | orchestrator | Downloading uv-0.10.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.8 MB) 2026-02-09 04:56:38.398369 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.8/22.8 MB 16.3 MB/s eta 0:00:00 2026-02-09 04:56:38.451294 | orchestrator | Installing collected packages: uv 2026-02-09 04:56:38.905162 | orchestrator | Successfully installed uv-0.10.0 2026-02-09 04:56:39.604918 | orchestrator | Resolved 11 packages in 307ms 2026-02-09 04:56:39.620408 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-09 04:56:39.653358 | orchestrator | Downloading cryptography (4.2MiB) 2026-02-09 04:56:39.653438 | orchestrator | Downloading ansible (54.5MiB) 2026-02-09 04:56:39.653448 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-09 04:56:40.219481 | orchestrator | Downloaded netaddr 2026-02-09 04:56:40.237813 | orchestrator | Downloaded ansible-core 2026-02-09 04:56:40.463375 | orchestrator | Downloaded cryptography 2026-02-09 04:56:48.469113 | orchestrator | Downloaded ansible 2026-02-09 04:56:48.469218 | orchestrator | Prepared 11 packages in 8.86s 2026-02-09 04:56:48.983710 | orchestrator | Installed 11 packages in 511ms 2026-02-09 04:56:48.983802 | orchestrator | + ansible==11.11.0 2026-02-09 04:56:48.983813 | orchestrator | + ansible-core==2.18.13 2026-02-09 04:56:48.983820 | orchestrator | + cffi==2.0.0 2026-02-09 04:56:48.983827 | orchestrator | + cryptography==46.0.4 2026-02-09 04:56:48.983833 | orchestrator | + jinja2==3.1.6 2026-02-09 04:56:48.983839 | orchestrator | 
+ markupsafe==3.0.3 2026-02-09 04:56:48.983845 | orchestrator | + netaddr==1.3.0 2026-02-09 04:56:48.983851 | orchestrator | + packaging==26.0 2026-02-09 04:56:48.983856 | orchestrator | + pycparser==3.0 2026-02-09 04:56:48.983862 | orchestrator | + pyyaml==6.0.3 2026-02-09 04:56:48.983868 | orchestrator | + resolvelib==1.0.1 2026-02-09 04:56:50.202206 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-199003qfd4n1vk/tmpodklmaki/ansible-collection-servicesv2mzupb_'... 2026-02-09 04:56:51.685787 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-09 04:56:51.685882 | orchestrator | Already on 'main' 2026-02-09 04:56:52.191357 | orchestrator | Starting galaxy collection install process 2026-02-09 04:56:52.191459 | orchestrator | Process install dependency map 2026-02-09 04:56:52.191476 | orchestrator | Starting collection install process 2026-02-09 04:56:52.191489 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-09 04:56:52.191503 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-09 04:56:52.191514 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-09 04:56:52.759715 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-1990298ozr8xla/tmpms1ald0_/ansible-playbooks-manager99d1cbkb'... 2026-02-09 04:56:53.332345 | orchestrator | Your branch is up to date with 'origin/main'. 
2026-02-09 04:56:53.332446 | orchestrator | Already on 'main' 2026-02-09 04:56:53.623145 | orchestrator | Starting galaxy collection install process 2026-02-09 04:56:53.623324 | orchestrator | Process install dependency map 2026-02-09 04:56:53.623346 | orchestrator | Starting collection install process 2026-02-09 04:56:53.623360 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-09 04:56:53.623411 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-09 04:56:53.623424 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-09 04:56:54.393467 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-09 04:56:54.393564 | orchestrator | -vvvv to see details 2026-02-09 04:56:54.816885 | orchestrator | 2026-02-09 04:56:54.816993 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-09 04:56:54.817013 | orchestrator | 2026-02-09 04:56:54.817028 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-09 04:56:59.351208 | orchestrator | ok: [testbed-manager] 2026-02-09 04:56:59.351450 | orchestrator | 2026-02-09 04:56:59.351489 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-09 04:56:59.440340 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-09 04:56:59.440507 | orchestrator | 2026-02-09 04:56:59.440567 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-09 04:57:01.628639 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:01.628729 | orchestrator | 2026-02-09 04:57:01.628742 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-02-09 04:57:01.702683 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:01.702784 | orchestrator | 2026-02-09 04:57:01.702801 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-09 04:57:01.815686 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-09 04:57:01.815781 | orchestrator | 2026-02-09 04:57:01.815796 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-09 04:57:06.469951 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-09 04:57:06.470155 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-09 04:57:06.470173 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-09 04:57:06.470201 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-09 04:57:06.470213 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-09 04:57:06.470224 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-09 04:57:06.470235 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-09 04:57:06.470247 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-09 04:57:06.470258 | orchestrator | 2026-02-09 04:57:06.470271 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-09 04:57:07.697585 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:07.697731 | orchestrator | 2026-02-09 04:57:07.697747 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-09 04:57:08.759465 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:08.759606 | orchestrator | 2026-02-09 04:57:08.759622 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-02-09 04:57:08.853144 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-09 04:57:08.853275 | orchestrator | 2026-02-09 04:57:08.853336 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-09 04:57:10.876207 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-09 04:57:10.876349 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-09 04:57:10.876363 | orchestrator | 2026-02-09 04:57:10.876374 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-09 04:57:11.881633 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:11.881771 | orchestrator | 2026-02-09 04:57:11.881790 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-09 04:57:11.949421 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:57:11.949511 | orchestrator | 2026-02-09 04:57:11.949528 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-09 04:57:12.056367 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-09 04:57:12.056462 | orchestrator | 2026-02-09 04:57:12.056477 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-09 04:57:13.110511 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:13.110643 | orchestrator | 2026-02-09 04:57:13.110663 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-09 04:57:13.192255 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-09 04:57:13.192391 | 
orchestrator | 2026-02-09 04:57:13.192417 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-09 04:57:15.302898 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-09 04:57:15.303020 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-09 04:57:15.303031 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:15.303042 | orchestrator | 2026-02-09 04:57:15.303052 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-09 04:57:16.384036 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:16.384165 | orchestrator | 2026-02-09 04:57:16.384182 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-09 04:57:16.455078 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:57:16.455209 | orchestrator | 2026-02-09 04:57:16.455225 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-09 04:57:16.583389 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-09 04:57:16.583504 | orchestrator | 2026-02-09 04:57:16.583515 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-09 04:57:17.368828 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:17.368957 | orchestrator | 2026-02-09 04:57:17.368973 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-09 04:57:18.041503 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:18.041620 | orchestrator | 2026-02-09 04:57:18.041633 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-09 04:57:20.150246 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-09 04:57:20.150395 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-02-09 04:57:20.150404 | orchestrator | 2026-02-09 04:57:20.150411 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-09 04:57:21.444018 | orchestrator | changed: [testbed-manager] 2026-02-09 04:57:21.444125 | orchestrator | 2026-02-09 04:57:21.444142 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-09 04:57:22.065396 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:22.065484 | orchestrator | 2026-02-09 04:57:22.065497 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-09 04:57:22.634564 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:22.634667 | orchestrator | 2026-02-09 04:57:22.634706 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-09 04:57:22.710401 | orchestrator | skipping: [testbed-manager] 2026-02-09 04:57:22.710511 | orchestrator | 2026-02-09 04:57:22.710527 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-09 04:57:22.807851 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-09 04:57:22.807944 | orchestrator | 2026-02-09 04:57:22.807960 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-09 04:57:22.881697 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:22.881789 | orchestrator | 2026-02-09 04:57:22.881804 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-09 04:57:26.108542 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-09 04:57:26.108653 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-09 04:57:26.108670 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
2026-02-09 04:57:26.108683 | orchestrator | 2026-02-09 04:57:26.108695 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-09 04:57:27.194695 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:27.194815 | orchestrator | 2026-02-09 04:57:27.194838 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-09 04:57:28.392454 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:28.392561 | orchestrator | 2026-02-09 04:57:28.392578 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-09 04:57:29.555415 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:29.555522 | orchestrator | 2026-02-09 04:57:29.555540 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-09 04:57:29.664958 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-09 04:57:29.665060 | orchestrator | 2026-02-09 04:57:29.665077 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-09 04:57:29.729628 | orchestrator | ok: [testbed-manager] 2026-02-09 04:57:29.729722 | orchestrator | 2026-02-09 04:57:29.729737 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-09 04:57:30.851925 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-09 04:57:30.852050 | orchestrator | 2026-02-09 04:57:30.852069 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-09 04:57:30.950944 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-09 04:57:30.951040 | orchestrator | 2026-02-09 04:57:30.951054 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] *****************
2026-02-09 04:57:32.047748 | orchestrator | ok: [testbed-manager]
2026-02-09 04:57:32.047860 | orchestrator |
2026-02-09 04:57:32.047878 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-09 04:57:33.355496 | orchestrator | ok: [testbed-manager]
2026-02-09 04:57:33.355573 | orchestrator |
2026-02-09 04:57:33.355582 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-09 04:57:33.443911 | orchestrator | skipping: [testbed-manager]
2026-02-09 04:57:33.444013 | orchestrator |
2026-02-09 04:57:33.444029 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-09 04:57:33.526159 | orchestrator | ok: [testbed-manager]
2026-02-09 04:57:33.526254 | orchestrator |
2026-02-09 04:57:33.526272 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-09 04:57:35.020530 | orchestrator | changed: [testbed-manager]
2026-02-09 04:57:35.020630 | orchestrator |
2026-02-09 04:57:35.020645 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-09 04:58:47.633248 | orchestrator | changed: [testbed-manager]
2026-02-09 04:58:47.633449 | orchestrator |
2026-02-09 04:58:47.633468 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-09 04:58:49.003783 | orchestrator | ok: [testbed-manager]
2026-02-09 04:58:49.003937 | orchestrator |
2026-02-09 04:58:49.003956 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-09 04:58:49.068928 | orchestrator | skipping: [testbed-manager]
2026-02-09 04:58:49.069051 | orchestrator |
2026-02-09 04:58:49.069069 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-09 04:58:49.941667 | orchestrator | ok: [testbed-manager]
2026-02-09 04:58:49.941791 | orchestrator |
2026-02-09 04:58:49.941806 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-09 04:58:50.025217 | orchestrator | skipping: [testbed-manager]
2026-02-09 04:58:50.025342 | orchestrator |
2026-02-09 04:58:50.025356 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-09 04:58:50.025368 | orchestrator |
2026-02-09 04:58:50.025378 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-09 04:59:05.217214 | orchestrator | changed: [testbed-manager]
2026-02-09 04:59:05.217336 | orchestrator |
2026-02-09 04:59:05.217352 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-09 05:00:05.336319 | orchestrator | Pausing for 60 seconds
2026-02-09 05:00:05.336456 | orchestrator | changed: [testbed-manager]
2026-02-09 05:00:05.336502 | orchestrator |
2026-02-09 05:00:05.336514 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-02-09 05:00:05.409562 | orchestrator | ok: [testbed-manager]
2026-02-09 05:00:05.409678 | orchestrator |
2026-02-09 05:00:05.409689 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-09 05:00:09.715155 | orchestrator | changed: [testbed-manager]
2026-02-09 05:00:09.715247 | orchestrator |
2026-02-09 05:00:09.715255 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-09 05:01:12.869069 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-09 05:01:12.869208 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-09 05:01:12.869224 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-09 05:01:12.869236 | orchestrator | changed: [testbed-manager]
2026-02-09 05:01:12.869249 | orchestrator |
2026-02-09 05:01:12.869261 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-09 05:01:26.936121 | orchestrator | changed: [testbed-manager]
2026-02-09 05:01:26.936260 | orchestrator |
2026-02-09 05:01:26.936275 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-09 05:01:27.027127 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-09 05:01:27.027267 | orchestrator |
2026-02-09 05:01:27.027278 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-09 05:01:27.027286 | orchestrator |
2026-02-09 05:01:27.027292 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-09 05:01:27.092332 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:01:27.092438 | orchestrator |
2026-02-09 05:01:27.092448 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-09 05:01:27.164717 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-09 05:01:27.164831 | orchestrator |
2026-02-09 05:01:27.164871 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-09 05:01:28.432062 | orchestrator | changed: [testbed-manager]
2026-02-09 05:01:28.432189 | orchestrator |
2026-02-09 05:01:28.432206 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-09 05:01:32.249051 | orchestrator | ok: [testbed-manager]
2026-02-09 05:01:32.249170 | orchestrator |
2026-02-09 05:01:32.249181 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-09 05:01:32.332872 | orchestrator | ok: [testbed-manager] => {
2026-02-09 05:01:32.332978 | orchestrator | "version_check_result.stdout_lines": [
2026-02-09 05:01:32.332990 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-09 05:01:32.332999 | orchestrator | "Checking running containers against expected versions...",
2026-02-09 05:01:32.333008 | orchestrator | "",
2026-02-09 05:01:32.333016 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-09 05:01:32.333024 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-09 05:01:32.333032 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333040 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0",
2026-02-09 05:01:32.333048 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333055 | orchestrator | "",
2026-02-09 05:01:32.333063 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-09 05:01:32.333071 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-09 05:01:32.333078 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333086 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0",
2026-02-09 05:01:32.333093 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333101 | orchestrator | "",
2026-02-09 05:01:32.333108 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-09 05:01:32.333116 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-09 05:01:32.333123 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333130 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251208.0",
2026-02-09 05:01:32.333137 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333145 | orchestrator | "",
2026-02-09 05:01:32.333152 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-09 05:01:32.333160 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-09 05:01:32.333167 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333174 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0",
2026-02-09 05:01:32.333182 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333189 | orchestrator | "",
2026-02-09 05:01:32.333197 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-09 05:01:32.333204 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-09 05:01:32.333211 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333219 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0",
2026-02-09 05:01:32.333226 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333233 | orchestrator | "",
2026-02-09 05:01:32.333240 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-09 05:01:32.333271 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333278 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333286 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333293 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333300 | orchestrator | "",
2026-02-09 05:01:32.333307 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-09 05:01:32.333315 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-09 05:01:32.333322 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333329 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-09 05:01:32.333336 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333343 | orchestrator | "",
2026-02-09 05:01:32.333350 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-02-09 05:01:32.333358 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-09 05:01:32.333365 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333381 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-09 05:01:32.333389 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333396 | orchestrator | "",
2026-02-09 05:01:32.333403 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-02-09 05:01:32.333410 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-02-09 05:01:32.333418 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333425 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0",
2026-02-09 05:01:32.333432 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333439 | orchestrator | "",
2026-02-09 05:01:32.333451 | orchestrator | "Checking service: redis (Redis Cache)",
2026-02-09 05:01:32.333459 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-09 05:01:32.333467 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333474 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-09 05:01:32.333481 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333488 | orchestrator | "",
2026-02-09 05:01:32.333495 | orchestrator | "Checking service: api (OSISM API Service)",
2026-02-09 05:01:32.333502 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333510 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333517 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333524 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333531 | orchestrator | "",
2026-02-09 05:01:32.333538 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-02-09 05:01:32.333569 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333577 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333584 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333592 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333599 | orchestrator | "",
2026-02-09 05:01:32.333606 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-02-09 05:01:32.333614 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333621 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333628 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333635 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333642 | orchestrator | "",
2026-02-09 05:01:32.333650 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-02-09 05:01:32.333657 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333664 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333671 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333695 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333703 | orchestrator | "",
2026-02-09 05:01:32.333710 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-02-09 05:01:32.333718 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333731 | orchestrator | " Enabled: true",
2026-02-09 05:01:32.333739 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0",
2026-02-09 05:01:32.333746 | orchestrator | " Status: ✅ MATCH",
2026-02-09 05:01:32.333753 | orchestrator | "",
2026-02-09 05:01:32.333760 | orchestrator | "=== Summary ===",
2026-02-09 05:01:32.333767 | orchestrator | "Errors (version mismatches): 0",
2026-02-09 05:01:32.333775 | orchestrator | "Warnings (expected containers not running): 0",
2026-02-09 05:01:32.333782 | orchestrator | "",
2026-02-09 05:01:32.333789 | orchestrator | "✅ All running containers match expected versions!"
2026-02-09 05:01:32.333797 | orchestrator | ]
2026-02-09 05:01:32.333804 | orchestrator | }
2026-02-09 05:01:32.333812 | orchestrator |
2026-02-09 05:01:32.333820 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-02-09 05:01:32.406828 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:01:32.406950 | orchestrator |
2026-02-09 05:01:32.406964 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 05:01:32.406978 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2026-02-09 05:01:32.406989 | orchestrator |
2026-02-09 05:01:45.356735 | orchestrator | 2026-02-09 05:01:45 | INFO  | Task 7c07e916-5658-45fd-bb12-f7093a4efc14 (sync inventory) is running in background. Output coming soon.
2026-02-09 05:02:18.146130 | orchestrator | 2026-02-09 05:01:47 | INFO  | Starting group_vars file reorganization
2026-02-09 05:02:18.146246 | orchestrator | 2026-02-09 05:01:47 | INFO  | Moved 0 file(s) to their respective directories
2026-02-09 05:02:18.146264 | orchestrator | 2026-02-09 05:01:47 | INFO  | Group_vars file reorganization completed
2026-02-09 05:02:18.146296 | orchestrator | 2026-02-09 05:01:50 | INFO  | Starting variable preparation from inventory
2026-02-09 05:02:18.146309 | orchestrator | 2026-02-09 05:01:53 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-09 05:02:18.146321 | orchestrator | 2026-02-09 05:01:53 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-09 05:02:18.146332 | orchestrator | 2026-02-09 05:01:53 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-09 05:02:18.146343 | orchestrator | 2026-02-09 05:01:53 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-09 05:02:18.146354 | orchestrator | 2026-02-09 05:01:53 | INFO  | Variable preparation completed
2026-02-09 05:02:18.146365 | orchestrator | 2026-02-09 05:01:55 | INFO  | Starting inventory overwrite handling
2026-02-09 05:02:18.146376 | orchestrator | 2026-02-09 05:01:55 | INFO  | Handling group overwrites in 99-overwrite
2026-02-09 05:02:18.146387 | orchestrator | 2026-02-09 05:01:55 | INFO  | Removing group frr:children from 60-generic
2026-02-09 05:02:18.146398 | orchestrator | 2026-02-09 05:01:55 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-09 05:02:18.146409 | orchestrator | 2026-02-09 05:01:55 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-09 05:02:18.146420 | orchestrator | 2026-02-09 05:01:55 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-09 05:02:18.146430 | orchestrator | 2026-02-09 05:01:55 | INFO  | Handling group overwrites in 20-roles
2026-02-09 05:02:18.146441 | orchestrator | 2026-02-09 05:01:55 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-09 05:02:18.146452 | orchestrator | 2026-02-09 05:01:55 | INFO  | Removed 5 group(s) in total
2026-02-09 05:02:18.146463 | orchestrator | 2026-02-09 05:01:55 | INFO  | Inventory overwrite handling completed
2026-02-09 05:02:18.146474 | orchestrator | 2026-02-09 05:01:56 | INFO  | Starting merge of inventory files
2026-02-09 05:02:18.146485 | orchestrator | 2026-02-09 05:01:56 | INFO  | Inventory files merged successfully
2026-02-09 05:02:18.146519 | orchestrator | 2026-02-09 05:02:03 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-09 05:02:18.146530 | orchestrator | 2026-02-09 05:02:16 | INFO  | Successfully wrote ClusterShell configuration
2026-02-09 05:02:18.686366 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-09 05:02:18.686489 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-09 05:02:18.686515 | orchestrator | + local max_attempts=60
2026-02-09 05:02:18.686536 | orchestrator | + local name=kolla-ansible
2026-02-09 05:02:18.686555 | orchestrator | + local attempt_num=1
2026-02-09 05:02:18.686897 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-09 05:02:18.731797 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-09 05:02:18.731894 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-09 05:02:18.731920 | orchestrator | + local max_attempts=60
2026-02-09 05:02:18.731932 | orchestrator | + local name=osism-ansible
2026-02-09 05:02:18.731943 | orchestrator | + local attempt_num=1
2026-02-09 05:02:18.733573 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-09 05:02:18.779258 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-09 05:02:18.779356 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-02-09 05:02:18.994886 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-02-09 05:02:18.994994 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy)
2026-02-09 05:02:18.995010 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy)
2026-02-09 05:02:18.995021 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-02-09 05:02:18.995038 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp
2026-02-09 05:02:18.995049 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy)
2026-02-09 05:02:18.995060 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy)
2026-02-09 05:02:18.995071 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy)
2026-02-09 05:02:18.995082 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 13 seconds ago
2026-02-09 05:02:18.995092 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp
2026-02-09 05:02:18.995103 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy)
2026-02-09 05:02:18.995114 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 minutes (healthy) 6379/tcp
2026-02-09 05:02:18.995125 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy)
2026-02-09 05:02:18.995162 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp
2026-02-09 05:02:18.995174 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy)
2026-02-09 05:02:18.995185 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy)
2026-02-09 05:02:19.000510 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-02-09 05:02:19.000562 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-02-09 05:02:19.000577 | orchestrator | + osism apply facts
2026-02-09 05:02:31.619660 | orchestrator | 2026-02-09 05:02:31 | INFO  | Task 1147cc87-9840-4c8d-959b-3c16d4801b05 (facts) was prepared for execution.
2026-02-09 05:02:31.619765 | orchestrator | 2026-02-09 05:02:31 | INFO  | It takes a moment until task 1147cc87-9840-4c8d-959b-3c16d4801b05 (facts) has been started and output is visible here.
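The shell trace above shows a `wait_for_container_healthy` helper that polls `docker inspect -f '{{.State.Health.Status}}'` until a container reports healthy. A minimal sketch of that retry pattern, assuming the real script in the testbed repository hardcodes the `docker inspect` call; here the check command is passed as arguments so the sketch runs without Docker:

```shell
#!/bin/sh
# Generic retry helper modeled on the wait_for_container_healthy trace above
# (a sketch, not the actual testbed script).
# Usage: wait_for_healthy MAX_ATTEMPTS CMD [ARGS...]
wait_for_healthy() {
    max_attempts=$1
    shift
    attempt_num=1
    # Retry the check command until it succeeds or attempts are exhausted.
    until "$@"; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "check '$*' failed after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1
    done
}

# Mirroring the trace (requires Docker; container name is an example):
#   wait_for_healthy 60 sh -c \
#     '[ "$(/usr/bin/docker inspect -f "{{.State.Health.Status}}" kolla-ansible)" = healthy ]'
```

Parameterizing the check command keeps the helper reusable for any "poll until ready" step, not just container health.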
2026-02-09 05:02:56.478486 | orchestrator | 2026-02-09 05:02:56.478695 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-09 05:02:56.478718 | orchestrator | 2026-02-09 05:02:56.478730 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-09 05:02:56.478742 | orchestrator | Monday 09 February 2026 05:02:38 +0000 (0:00:02.256) 0:00:02.256 ******* 2026-02-09 05:02:56.478754 | orchestrator | ok: [testbed-manager] 2026-02-09 05:02:56.478767 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:02:56.478780 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:02:56.478791 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:02:56.478802 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:02:56.478813 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:02:56.478824 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:02:56.478835 | orchestrator | 2026-02-09 05:02:56.478847 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-09 05:02:56.478859 | orchestrator | Monday 09 February 2026 05:02:42 +0000 (0:00:03.774) 0:00:06.030 ******* 2026-02-09 05:02:56.478871 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:02:56.478884 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:02:56.478896 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:02:56.478908 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:02:56.478919 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:02:56.478930 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:02:56.478941 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:02:56.478953 | orchestrator | 2026-02-09 05:02:56.478965 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-09 05:02:56.478976 | orchestrator | 2026-02-09 05:02:56.478986 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-09 05:02:56.478997 | orchestrator | Monday 09 February 2026 05:02:45 +0000 (0:00:02.876) 0:00:08.907 ******* 2026-02-09 05:02:56.479009 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:02:56.479045 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:02:56.479057 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:02:56.479068 | orchestrator | ok: [testbed-manager] 2026-02-09 05:02:56.479084 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:02:56.479095 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:02:56.479105 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:02:56.479117 | orchestrator | 2026-02-09 05:02:56.479127 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-09 05:02:56.479139 | orchestrator | 2026-02-09 05:02:56.479151 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-09 05:02:56.479163 | orchestrator | Monday 09 February 2026 05:02:52 +0000 (0:00:07.503) 0:00:16.410 ******* 2026-02-09 05:02:56.479175 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:02:56.479218 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:02:56.479230 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:02:56.479241 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:02:56.479253 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:02:56.479265 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:02:56.479276 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:02:56.479288 | orchestrator | 2026-02-09 05:02:56.479296 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 05:02:56.479303 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 05:02:56.479312 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-09 05:02:56.479319 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 05:02:56.479325 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 05:02:56.479332 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 05:02:56.479338 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 05:02:56.479344 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-09 05:02:56.479351 | orchestrator | 2026-02-09 05:02:56.479357 | orchestrator | 2026-02-09 05:02:56.479364 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 05:02:56.479370 | orchestrator | Monday 09 February 2026 05:02:55 +0000 (0:00:02.886) 0:00:19.296 ******* 2026-02-09 05:02:56.479377 | orchestrator | =============================================================================== 2026-02-09 05:02:56.479383 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.50s 2026-02-09 05:02:56.479391 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.77s 2026-02-09 05:02:56.479397 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.89s 2026-02-09 05:02:56.479404 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.88s 2026-02-09 05:02:56.878603 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-09 05:02:56.982897 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-09 05:02:56.983018 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-09 05:02:57.031108 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-09 05:02:57.031219 | 
orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-09 05:02:57.039546 | orchestrator | + set -e 2026-02-09 05:02:57.039613 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-09 05:02:57.039638 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-09 05:02:57.049832 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-09 05:02:57.057408 | orchestrator | 2026-02-09 05:02:57.057447 | orchestrator | # UPGRADE SERVICES 2026-02-09 05:02:57.057452 | orchestrator | 2026-02-09 05:02:57.057457 | orchestrator | + set -e 2026-02-09 05:02:57.057461 | orchestrator | + echo 2026-02-09 05:02:57.057466 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-09 05:02:57.057470 | orchestrator | + echo 2026-02-09 05:02:57.057474 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 05:02:57.059106 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 05:02:57.059139 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 05:02:57.059145 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-09 05:02:57.059150 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 05:02:57.059154 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 05:02:57.059159 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 05:02:57.059164 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 05:02:57.059199 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 05:02:57.059206 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 05:02:57.059212 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 05:02:57.059219 | orchestrator | ++ export ARA=false 2026-02-09 05:02:57.059223 | orchestrator | ++ ARA=false 2026-02-09 05:02:57.059227 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 05:02:57.059231 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 05:02:57.059235 | orchestrator | ++ export TEMPEST=false 
2026-02-09 05:02:57.059239 | orchestrator | ++ TEMPEST=false 2026-02-09 05:02:57.059242 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 05:02:57.059246 | orchestrator | ++ IS_ZUUL=true 2026-02-09 05:02:57.059250 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 05:02:57.059254 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 05:02:57.059258 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 05:02:57.059261 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 05:02:57.059265 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 05:02:57.059269 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 05:02:57.059272 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 05:02:57.059276 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 05:02:57.059279 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 05:02:57.059283 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 05:02:57.059287 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-09 05:02:57.059290 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-09 05:02:57.059295 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-02-09 05:02:57.059302 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-02-09 05:02:57.059308 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-09 05:02:57.065674 | orchestrator | + set -e 2026-02-09 05:02:57.065703 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-09 05:02:57.066713 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 05:02:57.066751 | orchestrator | ++ INTERACTIVE=false 2026-02-09 05:02:57.066802 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 05:02:57.066808 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-09 05:02:57.066813 | orchestrator | + source /opt/manager-vars.sh 2026-02-09 05:02:57.066817 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-09 05:02:57.066821 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-09 05:02:57.066862 | orchestrator | ++ 
export CEPH_VERSION=reef 2026-02-09 05:02:57.066868 | orchestrator | ++ CEPH_VERSION=reef 2026-02-09 05:02:57.066931 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-09 05:02:57.066992 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-09 05:02:57.067136 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-09 05:02:57.067162 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-09 05:02:57.067274 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-09 05:02:57.067288 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-09 05:02:57.067305 | orchestrator | ++ export ARA=false 2026-02-09 05:02:57.067310 | orchestrator | ++ ARA=false 2026-02-09 05:02:57.067315 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-09 05:02:57.067320 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-09 05:02:57.067324 | orchestrator | ++ export TEMPEST=false 2026-02-09 05:02:57.067329 | orchestrator | ++ TEMPEST=false 2026-02-09 05:02:57.067340 | orchestrator | ++ export IS_ZUUL=true 2026-02-09 05:02:57.067345 | orchestrator | ++ IS_ZUUL=true 2026-02-09 05:02:57.067349 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 05:02:57.067354 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.31 2026-02-09 05:02:57.067359 | orchestrator | ++ export EXTERNAL_API=false 2026-02-09 05:02:57.067363 | orchestrator | ++ EXTERNAL_API=false 2026-02-09 05:02:57.067367 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-09 05:02:57.067372 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-09 05:02:57.067377 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-09 05:02:57.067388 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-09 05:02:57.067461 | orchestrator | 2026-02-09 05:02:57.067468 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-09 05:02:57.067472 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-09 05:02:57.067477 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-09 05:02:57.067481 | orchestrator | ++ RABBITMQ3TO4=true 
2026-02-09 05:02:57.067486 | orchestrator | + echo
2026-02-09 05:02:57.067490 | orchestrator | # PULL IMAGES
2026-02-09 05:02:57.067495 | orchestrator | + echo '# PULL IMAGES'
2026-02-09 05:02:57.067561 | orchestrator |
2026-02-09 05:02:57.067568 | orchestrator | + echo
2026-02-09 05:02:57.068739 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-09 05:02:57.120685 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-09 05:02:57.120797 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-09 05:02:59.347051 | orchestrator | 2026-02-09 05:02:59 | INFO  | Trying to run play pull-images in environment custom
2026-02-09 05:03:09.564325 | orchestrator | 2026-02-09 05:03:09 | INFO  | Task c8850f82-10ab-4642-b1fc-4ef22a005886 (pull-images) was prepared for execution.
2026-02-09 05:03:09.564465 | orchestrator | 2026-02-09 05:03:09 | INFO  | Task c8850f82-10ab-4642-b1fc-4ef22a005886 is running in background. No more output. Check ARA for logs.
2026-02-09 05:03:09.996787 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-09 05:03:10.007201 | orchestrator | + set -e
2026-02-09 05:03:10.007313 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-09 05:03:10.007331 | orchestrator | ++ export INTERACTIVE=false
2026-02-09 05:03:10.007343 | orchestrator | ++ INTERACTIVE=false
2026-02-09 05:03:10.007354 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-09 05:03:10.007434 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-09 05:03:10.007449 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-09 05:03:10.009916 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-09 05:03:10.019746 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-09 05:03:10.019850 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-09 05:03:10.019903 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-09 05:03:10.071859 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-09 05:03:10.071976 | orchestrator | + osism apply frr
2026-02-09 05:03:22.650991 | orchestrator | 2026-02-09 05:03:22 | INFO  | Task c80ccc10-136e-4b48-9471-9d8c2a98d6d2 (frr) was prepared for execution.
2026-02-09 05:03:22.651099 | orchestrator | 2026-02-09 05:03:22 | INFO  | It takes a moment until task c80ccc10-136e-4b48-9471-9d8c2a98d6d2 (frr) has been started and output is visible here.
2026-02-09 05:03:48.608465 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-09 05:03:48.608785 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-09 05:03:48.608827 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-09 05:03:48.608838 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-09 05:03:48.608861 | orchestrator |
2026-02-09 05:03:48.608873 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-09 05:03:48.608883 | orchestrator |
2026-02-09 05:03:48.608895 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-09 05:03:48.608911 | orchestrator | Monday 09 February 2026 05:03:31 +0000 (0:00:03.924) 0:00:03.925 *******
2026-02-09 05:03:48.608930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-09 05:03:48.608948 | orchestrator |
2026-02-09 05:03:48.608966 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-09 05:03:48.608986 | orchestrator | Monday 09 February 2026 05:03:34 +0000 (0:00:02.300) 0:00:06.225 *******
2026-02-09 05:03:48.609006 | orchestrator | ok: [testbed-manager]
2026-02-09 05:03:48.609028 | orchestrator |
2026-02-09 05:03:48.609047 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-09 05:03:48.609064 | orchestrator | Monday 09 February 2026 05:03:35 +0000 (0:00:01.671) 0:00:07.896 *******
2026-02-09 05:03:48.609076 | orchestrator | ok: [testbed-manager]
2026-02-09 05:03:48.609141 | orchestrator |
2026-02-09 05:03:48.609156 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-09 05:03:48.609169 | orchestrator | Monday 09 February 2026 05:03:38 +0000 (0:00:02.283) 0:00:10.179 *******
2026-02-09 05:03:48.609181 | orchestrator | ok: [testbed-manager]
2026-02-09 05:03:48.609194 | orchestrator |
2026-02-09 05:03:48.609207 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-09 05:03:48.609220 | orchestrator | Monday 09 February 2026 05:03:39 +0000 (0:00:00.998) 0:00:11.178 *******
2026-02-09 05:03:48.609286 | orchestrator | ok: [testbed-manager]
2026-02-09 05:03:48.609333 | orchestrator |
2026-02-09 05:03:48.609357 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-09 05:03:48.609383 | orchestrator | Monday 09 February 2026 05:03:40 +0000 (0:00:01.028) 0:00:12.207 *******
2026-02-09 05:03:48.609505 | orchestrator | ok: [testbed-manager]
2026-02-09 05:03:48.609531 | orchestrator |
2026-02-09 05:03:48.609550 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-09 05:03:48.609568 | orchestrator | Monday 09 February 2026 05:03:41 +0000 (0:00:01.558) 0:00:13.766 *******
2026-02-09 05:03:48.609655 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:03:48.609679 | orchestrator |
2026-02-09 05:03:48.609698 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-09 05:03:48.609715 | orchestrator | Monday 09 February 2026 05:03:41 +0000 (0:00:00.186) 0:00:13.952 *******
2026-02-09 05:03:48.609733 | orchestrator | skipping: [testbed-manager]
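The frr role first looks for a site-specific `frr.conf` in the configuration repository and, since none exists here (both repository tasks are skipped), falls back to a built-in default of type `k3s_cilium`. A minimal sketch of that pick-first-available pattern; the paths are illustrative, not the role's actual file layout:

```shell
#!/usr/bin/env bash
# "Site file if present, otherwise built-in default" selection, as the
# frr role does above. Paths are hypothetical examples.
set -e

pick_frr_conf() {
    local repo_conf=$1 default_conf=$2
    # Prefer a frr.conf shipped in the configuration repository,
    # fall back to the role's default template for the given type.
    if [ -f "$repo_conf" ]; then
        echo "$repo_conf"
    else
        echo "$default_conf"
    fi
}

tmp=$(mktemp -d)
src=$(pick_frr_conf "$tmp/frr.conf" "$tmp/frr.conf.k3s_cilium.default")
echo "would install: $src"
```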
2026-02-09 05:03:48.609751 | orchestrator |
2026-02-09 05:03:48.609768 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-09 05:03:48.609834 | orchestrator | Monday 09 February 2026 05:03:42 +0000 (0:00:00.190) 0:00:14.143 *******
2026-02-09 05:03:48.609859 | orchestrator | ok: [testbed-manager]
2026-02-09 05:03:48.609877 | orchestrator |
2026-02-09 05:03:48.609896 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-09 05:03:48.609913 | orchestrator | Monday 09 February 2026 05:03:43 +0000 (0:00:01.034) 0:00:15.178 *******
2026-02-09 05:03:48.609932 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-09 05:03:48.609974 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-09 05:03:48.609995 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-09 05:03:48.610089 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-09 05:03:48.610118 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-09 05:03:48.610139 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-09 05:03:48.610159 | orchestrator |
2026-02-09 05:03:48.610178 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-09 05:03:48.610198 | orchestrator | Monday 09 February 2026 05:03:46 +0000 (0:00:03.146) 0:00:18.324 *******
2026-02-09 05:03:48.610217 | orchestrator | ok: [testbed-manager]
2026-02-09 05:03:48.610234 | orchestrator |
2026-02-09 05:03:48.610253 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 05:03:48.610273 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-09 05:03:48.610292 | orchestrator |
2026-02-09 05:03:48.610311 | orchestrator |
2026-02-09 05:03:48.610331 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 05:03:48.610350 | orchestrator | Monday 09 February 2026 05:03:48 +0000 (0:00:01.980) 0:00:20.305 *******
2026-02-09 05:03:48.610425 | orchestrator | ===============================================================================
2026-02-09 05:03:48.610445 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.15s
2026-02-09 05:03:48.610464 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.30s
2026-02-09 05:03:48.610510 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.28s
2026-02-09 05:03:48.610530 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.98s
2026-02-09 05:03:48.610548 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.67s
2026-02-09 05:03:48.610567 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.56s
2026-02-09 05:03:48.610635 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.04s
2026-02-09 05:03:48.610666 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.03s
2026-02-09 05:03:48.610677 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.00s
2026-02-09 05:03:48.610688 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.19s
2026-02-09 05:03:48.610742 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.19s
2026-02-09 05:03:49.035142 | orchestrator | + osism apply kubernetes
2026-02-09 05:03:51.559304 | orchestrator | 2026-02-09 05:03:51 | INFO  | Task 93a6331a-32d2-4092-a65d-bf3e80651055 (kubernetes) was prepared for execution.
2026-02-09 05:03:51.559404 | orchestrator | 2026-02-09 05:03:51 | INFO  | It takes a moment until task 93a6331a-32d2-4092-a65d-bf3e80651055 (kubernetes) has been started and output is visible here.
2026-02-09 05:04:39.201323 | orchestrator |
2026-02-09 05:04:39.201516 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-09 05:04:39.201534 | orchestrator |
2026-02-09 05:04:39.201545 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-09 05:04:39.201556 | orchestrator | Monday 09 February 2026 05:03:59 +0000 (0:00:02.332) 0:00:02.332 *******
2026-02-09 05:04:39.201566 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:04:39.201577 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:04:39.201587 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:04:39.201596 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:04:39.201606 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:04:39.201615 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:04:39.201625 | orchestrator |
2026-02-09 05:04:39.201634 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-09 05:04:39.201644 | orchestrator | Monday 09 February 2026 05:04:03 +0000 (0:00:04.509) 0:00:06.841 *******
2026-02-09 05:04:39.201654 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.201665 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.201675 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.201684 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.201693 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.201703 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.201712 | orchestrator |
2026-02-09 05:04:39.201722 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-09 05:04:39.201732 | orchestrator | Monday 09 February 2026 05:04:05 +0000 (0:00:01.972) 0:00:08.814 *******
2026-02-09 05:04:39.201742 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.201752 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.201762 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.201771 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.201781 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.201790 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.201800 | orchestrator |
2026-02-09 05:04:39.201810 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-09 05:04:39.201822 | orchestrator | Monday 09 February 2026 05:04:07 +0000 (0:00:02.005) 0:00:10.820 *******
2026-02-09 05:04:39.201833 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:04:39.201844 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:04:39.201856 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:04:39.201867 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:04:39.201879 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:04:39.201890 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:04:39.201901 | orchestrator |
2026-02-09 05:04:39.201912 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-09 05:04:39.201924 | orchestrator | Monday 09 February 2026 05:04:10 +0000 (0:00:02.680) 0:00:13.500 *******
2026-02-09 05:04:39.201936 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:04:39.201948 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:04:39.201959 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:04:39.201971 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:04:39.202073 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:04:39.202088 | orchestrator | ok: [testbed-node-2]
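The prereq tasks above enable IPv4 and IPv6 forwarding via sysctl on every node. The current values can be inspected without privileges through `/proc/sys`; a minimal read-only sketch (setting them, as the role does, would require root):

```shell
#!/usr/bin/env bash
# Read-only check of the forwarding sysctls the k3s_prereq tasks manage.

sysctl_value() {
    # Map a sysctl key to its /proc/sys path,
    # e.g. net.ipv4.ip_forward -> /proc/sys/net/ipv4/ip_forward
    local key=$1
    cat "/proc/sys/${key//.//}" 2>/dev/null || echo "unavailable"
}

for key in net.ipv4.ip_forward net.ipv6.conf.all.forwarding; do
    echo "$key = $(sysctl_value "$key")"
done
```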
2026-02-09 05:04:39.202100 | orchestrator |
2026-02-09 05:04:39.202112 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-09 05:04:39.202124 | orchestrator | Monday 09 February 2026 05:04:13 +0000 (0:00:02.788) 0:00:16.289 *******
2026-02-09 05:04:39.202135 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:04:39.202146 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:04:39.202157 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:04:39.202169 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:04:39.202181 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:04:39.202192 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:04:39.202201 | orchestrator |
2026-02-09 05:04:39.202211 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-09 05:04:39.202221 | orchestrator | Monday 09 February 2026 05:04:15 +0000 (0:00:02.712) 0:00:19.001 *******
2026-02-09 05:04:39.202231 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.202240 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.202250 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.202259 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.202269 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.202278 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.202288 | orchestrator |
2026-02-09 05:04:39.202297 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-09 05:04:39.202307 | orchestrator | Monday 09 February 2026 05:04:17 +0000 (0:00:02.245) 0:00:21.247 *******
2026-02-09 05:04:39.202316 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.202326 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.202335 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.202345 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.202367 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.202377 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.202387 | orchestrator |
2026-02-09 05:04:39.202414 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-09 05:04:39.202424 | orchestrator | Monday 09 February 2026 05:04:19 +0000 (0:00:01.903) 0:00:23.150 *******
2026-02-09 05:04:39.202434 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-09 05:04:39.202443 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-09 05:04:39.202453 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.202462 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-09 05:04:39.202472 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-09 05:04:39.202482 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.202491 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-09 05:04:39.202501 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-09 05:04:39.202510 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.202520 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-09 05:04:39.202530 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-09 05:04:39.202539 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.202568 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-09 05:04:39.202579 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-09 05:04:39.202589 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.202598 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-09 05:04:39.202608 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-09 05:04:39.202617 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.202627 | orchestrator |
2026-02-09 05:04:39.202637 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-09 05:04:39.202657 | orchestrator | Monday 09 February 2026 05:04:22 +0000 (0:00:02.237) 0:00:25.387 *******
2026-02-09 05:04:39.202667 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.202676 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.202686 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.202695 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.202705 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.202714 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.202724 | orchestrator |
2026-02-09 05:04:39.202734 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-09 05:04:39.202745 | orchestrator | Monday 09 February 2026 05:04:24 +0000 (0:00:02.168) 0:00:27.556 *******
2026-02-09 05:04:39.202755 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:04:39.202764 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:04:39.202774 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:04:39.202784 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:04:39.202794 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:04:39.202803 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:04:39.202813 | orchestrator |
2026-02-09 05:04:39.202823 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-09 05:04:39.202832 | orchestrator | Monday 09 February 2026 05:04:26 +0000 (0:00:02.171) 0:00:29.728 *******
2026-02-09 05:04:39.202842 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:04:39.202851 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:04:39.202861 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:04:39.202870 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:04:39.202880 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:04:39.202889 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:04:39.202898 | orchestrator |
2026-02-09 05:04:39.202908 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-09 05:04:39.202918 | orchestrator | Monday 09 February 2026 05:04:29 +0000 (0:00:02.887) 0:00:32.616 *******
2026-02-09 05:04:39.202927 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.202937 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.202946 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.202956 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.202966 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.202975 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.202985 | orchestrator |
2026-02-09 05:04:39.202994 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-09 05:04:39.203004 | orchestrator | Monday 09 February 2026 05:04:31 +0000 (0:00:02.397) 0:00:35.013 *******
2026-02-09 05:04:39.203014 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.203023 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.203033 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.203042 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.203051 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.203061 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.203071 | orchestrator |
2026-02-09 05:04:39.203081 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-09 05:04:39.203092 | orchestrator | Monday 09 February 2026 05:04:34 +0000 (0:00:02.478) 0:00:37.491 *******
2026-02-09 05:04:39.203102 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.203116 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.203126 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.203135 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.203145 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.203154 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.203164 | orchestrator |
2026-02-09 05:04:39.203174 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-09 05:04:39.203184 | orchestrator | Monday 09 February 2026 05:04:36 +0000 (0:00:02.181) 0:00:39.672 *******
2026-02-09 05:04:39.203201 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-09 05:04:39.203211 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-09 05:04:39.203221 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.203230 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-09 05:04:39.203240 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-09 05:04:39.203249 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.203259 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-09 05:04:39.203269 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-09 05:04:39.203278 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:04:39.203288 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-09 05:04:39.203298 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-09 05:04:39.203307 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:04:39.203317 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-09 05:04:39.203326 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-09 05:04:39.203336 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:04:39.203345 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-09 05:04:39.203355 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-09 05:04:39.203365 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:04:39.203374 | orchestrator |
2026-02-09 05:04:39.203384 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-09 05:04:39.203408 | orchestrator | Monday 09 February 2026 05:04:38 +0000 (0:00:02.232) 0:00:41.905 *******
2026-02-09 05:04:39.203419 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:04:39.203428 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:04:39.203444 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:06:23.600630 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:06:23.600771 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:06:23.600787 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:06:23.600799 | orchestrator |
2026-02-09 05:06:23.600811 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-09 05:06:23.600823 | orchestrator | Monday 09 February 2026 05:04:40 +0000 (0:00:02.092) 0:00:43.997 *******
2026-02-09 05:06:23.600833 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:06:23.600843 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:06:23.600853 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:06:23.600862 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:06:23.600872 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:06:23.600882 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:06:23.600892 | orchestrator |
2026-02-09 05:06:23.600902 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-09 05:06:23.600912 | orchestrator |
2026-02-09 05:06:23.600923 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-09 05:06:23.600934 | orchestrator | Monday 09 February 2026 05:04:43 +0000 (0:00:03.150) 0:00:47.147 *******
2026-02-09 05:06:23.600943 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.600976 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.600987 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.600997 | orchestrator |
2026-02-09 05:06:23.601007 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-09 05:06:23.601021 | orchestrator | Monday 09 February 2026 05:04:45 +0000 (0:00:02.090) 0:00:49.238 *******
2026-02-09 05:06:23.601031 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.601040 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.601050 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.601096 | orchestrator |
2026-02-09 05:06:23.601106 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-09 05:06:23.601116 | orchestrator | Monday 09 February 2026 05:04:49 +0000 (0:00:03.201) 0:00:52.440 *******
2026-02-09 05:06:23.601150 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:06:23.601163 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:06:23.601174 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:06:23.601186 | orchestrator |
2026-02-09 05:06:23.601198 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-09 05:06:23.601210 | orchestrator | Monday 09 February 2026 05:04:51 +0000 (0:00:02.091) 0:00:54.825 *******
2026-02-09 05:06:23.601222 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.601233 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.601245 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.601257 | orchestrator |
2026-02-09 05:06:23.601269 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-09 05:06:23.601280 | orchestrator | Monday 09 February 2026 05:04:53 +0000 (0:00:02.091) 0:00:56.916 *******
2026-02-09 05:06:23.601292 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:06:23.601304 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:06:23.601316 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:06:23.601328 | orchestrator |
2026-02-09 05:06:23.601340 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-09 05:06:23.601351 | orchestrator | Monday 09 February 2026 05:04:55 +0000 (0:00:01.481) 0:00:58.398 *******
2026-02-09 05:06:23.601364 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.601376 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.601387 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.601396 | orchestrator |
2026-02-09 05:06:23.601406 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-09 05:06:23.601416 | orchestrator | Monday 09 February 2026 05:04:56 +0000 (0:00:01.828) 0:01:00.226 *******
2026-02-09 05:06:23.601426 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.601435 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.601445 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.601454 | orchestrator |
2026-02-09 05:06:23.601464 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-09 05:06:23.601474 | orchestrator | Monday 09 February 2026 05:04:59 +0000 (0:00:02.430) 0:01:02.657 *******
2026-02-09 05:06:23.601484 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 05:06:23.601494 | orchestrator |
2026-02-09 05:06:23.601503 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-09 05:06:23.601513 | orchestrator | Monday 09 February 2026 05:05:01 +0000 (0:00:02.096) 0:01:04.754 *******
2026-02-09 05:06:23.601522 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.601532 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.601542 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.601551 | orchestrator |
2026-02-09 05:06:23.601561 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-09 05:06:23.601570 | orchestrator | Monday 09 February 2026 05:05:04 +0000 (0:00:02.675) 0:01:07.429 *******
2026-02-09 05:06:23.601580 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:06:23.601590 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.601599 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:06:23.601609 | orchestrator |
2026-02-09 05:06:23.601619 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-09 05:06:23.601628 | orchestrator | Monday 09 February 2026 05:05:05 +0000 (0:00:01.824) 0:01:09.254 *******
2026-02-09 05:06:23.601638 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:06:23.601648 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:06:23.601657 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:06:23.601667 | orchestrator |
2026-02-09 05:06:23.601676 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-09 05:06:23.601686 | orchestrator | Monday 09 February 2026 05:05:07 +0000 (0:00:01.995) 0:01:11.250 *******
2026-02-09 05:06:23.601696 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:06:23.601706 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:06:23.601715 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:06:23.601733 | orchestrator |
2026-02-09 05:06:23.601743 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-09 05:06:23.601753 | orchestrator | Monday 09 February 2026 05:05:10 +0000 (0:00:02.708) 0:01:13.959 *******
2026-02-09 05:06:23.601763 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:06:23.601772 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:06:23.601802 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:06:23.601812 | orchestrator |
2026-02-09 05:06:23.601822 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-09 05:06:23.601832 | orchestrator | Monday 09 February 2026 05:05:12 +0000 (0:00:01.501) 0:01:15.461 *******
2026-02-09 05:06:23.601842 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:06:23.601851 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:06:23.601861 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:06:23.601871 | orchestrator |
2026-02-09 05:06:23.601880 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-09 05:06:23.601890 | orchestrator | Monday 09 February 2026 05:05:13 +0000 (0:00:01.787) 0:01:17.249 *******
2026-02-09 05:06:23.601900 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:06:23.601909 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:06:23.601919 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:06:23.601929 | orchestrator |
2026-02-09 05:06:23.601938 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-09 05:06:23.601948 | orchestrator | Monday 09 February 2026 05:05:16 +0000 (0:00:02.610) 0:01:19.859 *******
2026-02-09 05:06:23.601958 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.601967 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.601977 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.601987 | orchestrator |
2026-02-09 05:06:23.601996 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-09 05:06:23.602006 | orchestrator | Monday 09 February 2026 05:05:18 +0000 (0:00:02.068) 0:01:21.927 *******
2026-02-09 05:06:23.602081 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.602095 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.602105 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.602114 | orchestrator |
2026-02-09 05:06:23.602124 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-09 05:06:23.602134 | orchestrator | Monday 09 February 2026 05:05:20 +0000 (0:00:01.614) 0:01:23.542 *******
2026-02-09 05:06:23.602145 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-09 05:06:23.602157 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-09 05:06:23.602167 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-09 05:06:23.602177 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-09 05:06:23.602187 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-09 05:06:23.602196 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
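The `FAILED - RETRYING` lines above come from Ansible's `retries`/`until` mechanism: the join check is re-run up to 20 times until every node registers. A minimal shell sketch of the same retry-until pattern, assuming a hypothetical `check` command stands in for the real test (which, in the role, counts joined nodes via the k3s API); a real interval would be a few seconds rather than 0:

```shell
#!/usr/bin/env bash
# Retry wrapper mirroring Ansible "retries: 20 / until: <condition>".
# "$@" is whatever check command must eventually succeed.

retry() {
    local attempts=$1 delay=$2
    shift 2
    local i
    for ((i = 1; i <= attempts; i++)); do
        if "$@"; then
            return 0
        fi
        echo "FAILED - RETRYING ($((attempts - i)) retries left)." >&2
        sleep "$delay"
    done
    return 1
}

# Hypothetical example check: succeeds on the third attempt.
n=0
check() { n=$((n + 1)); [ "$n" -ge 3 ]; }
retry 20 0 check && echo "all nodes joined"
```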
2026-02-09 05:06:23.602206 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.602216 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.602226 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.602236 | orchestrator |
2026-02-09 05:06:23.602245 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-09 05:06:23.602255 | orchestrator | Monday 09 February 2026 05:05:44 +0000 (0:00:23.796) 0:01:47.339 *******
2026-02-09 05:06:23.602265 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:06:23.602275 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:06:23.602294 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:06:23.602304 | orchestrator |
2026-02-09 05:06:23.602313 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-09 05:06:23.602323 | orchestrator | Monday 09 February 2026 05:05:45 +0000 (0:00:01.570) 0:01:48.910 *******
2026-02-09 05:06:23.602333 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:06:23.602343 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:06:23.602353 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:06:23.602362 | orchestrator |
2026-02-09 05:06:23.602372 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-09 05:06:23.602382 | orchestrator | Monday 09 February 2026 05:05:47 +0000 (0:00:02.286) 0:01:51.196 *******
2026-02-09 05:06:23.602392 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.602402 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.602411 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.602421 | orchestrator |
2026-02-09 05:06:23.602431 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-09 05:06:23.602441 | orchestrator | Monday 09 February 2026 05:05:50 +0000 (0:00:02.469) 0:01:53.666 *******
2026-02-09 05:06:23.602451 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:06:23.602460 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:06:23.602470 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:06:23.602480 | orchestrator |
2026-02-09 05:06:23.602489 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-09 05:06:23.602499 | orchestrator | Monday 09 February 2026 05:06:17 +0000 (0:00:27.531) 0:02:21.197 *******
2026-02-09 05:06:23.602509 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.602519 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.602535 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.602545 | orchestrator |
2026-02-09 05:06:23.602555 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-09 05:06:23.602564 | orchestrator | Monday 09 February 2026 05:06:19 +0000 (0:00:01.805) 0:02:23.003 *******
2026-02-09 05:06:23.602574 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:06:23.602585 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:06:23.602602 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:06:23.602618 | orchestrator |
2026-02-09 05:06:23.602636 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-09 05:06:23.602653 | orchestrator | Monday 09 February 2026 05:06:21 +0000 (0:00:01.785) 0:02:24.788 *******
2026-02-09 05:06:23.602670 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:06:23.602687 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:06:23.602702 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:06:23.602719 | orchestrator |
2026-02-09 05:06:23.602750 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-09 05:07:12.914203 | orchestrator | Monday 09 February 2026 05:06:23 +0000 (0:00:02.075) 0:02:26.863 *******
2026-02-09 05:07:12.914347 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:07:12.914365 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:07:12.914377 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:07:12.914388 | orchestrator |
2026-02-09 05:07:12.914400 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-09 05:07:12.914412 | orchestrator | Monday 09 February 2026 05:06:25 +0000 (0:00:01.948) 0:02:28.812 *******
2026-02-09 05:07:12.914423 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:07:12.914434 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:07:12.914445 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:07:12.914455 | orchestrator |
2026-02-09 05:07:12.914467 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-09 05:07:12.914478 | orchestrator | Monday 09 February 2026 05:06:26 +0000 (0:00:01.420) 0:02:30.232 *******
2026-02-09 05:07:12.914489 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:07:12.914503 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:07:12.914514 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:07:12.914524 | orchestrator |
2026-02-09 05:07:12.914535 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-09 05:07:12.914575 | orchestrator | Monday 09 February 2026 05:06:28 +0000 (0:00:01.716) 0:02:31.949 *******
2026-02-09 05:07:12.914605 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:07:12.914616 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:07:12.914627 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:07:12.914638 | orchestrator |
2026-02-09 05:07:12.914649 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-09 05:07:12.914663 | orchestrator | Monday 09 February 2026 05:06:30 +0000 (0:00:02.054) 0:02:34.003 *******
2026-02-09 05:07:12.914676 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:07:12.914689 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:07:12.914702 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:07:12.914716 | orchestrator |
2026-02-09 05:07:12.914729 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-09 05:07:12.914743 | orchestrator | Monday 09 February 2026 05:06:32 +0000 (0:00:01.791) 0:02:35.795 *******
2026-02-09 05:07:12.914755 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:07:12.914767 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:07:12.914780 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:07:12.914793 | orchestrator |
2026-02-09 05:07:12.914806 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-09 05:07:12.914819 | orchestrator | Monday 09 February 2026 05:06:34 +0000 (0:00:02.047) 0:02:37.843 *******
2026-02-09 05:07:12.914832 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:07:12.914844 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:07:12.914858 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:07:12.914871 | orchestrator |
2026-02-09 05:07:12.914883 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-09 05:07:12.914896 | orchestrator | Monday 09 February 2026 05:06:35 +0000 (0:00:01.425) 0:02:39.269 *******
2026-02-09 05:07:12.914910 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:07:12.914953 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:07:12.914972 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:07:12.914991 | orchestrator |
2026-02-09 05:07:12.915009 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-09 05:07:12.915028 | orchestrator | Monday 09 February 2026 05:06:37 +0000 (0:00:01.386) 0:02:40.656 *******
2026-02-09 05:07:12.915044 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:07:12.915061 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:07:12.915079 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:07:12.915098 | orchestrator |
2026-02-09 05:07:12.915116 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-09 05:07:12.915132 | orchestrator | Monday 09 February 2026 05:06:39 +0000 (0:00:01.742) 0:02:42.398 *******
2026-02-09 05:07:12.915143 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:07:12.915154 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:07:12.915165 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:07:12.915175 | orchestrator |
2026-02-09 05:07:12.915187 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-09 05:07:12.915200 | orchestrator | Monday 09 February 2026 05:06:40 +0000 (0:00:01.661) 0:02:44.060 *******
2026-02-09 05:07:12.915211 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-09 05:07:12.915223 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-09 05:07:12.915233 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-09 05:07:12.915244 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-09 05:07:12.915255 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-09 05:07:12.915266 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-09 05:07:12.915288 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-09 05:07:12.915299 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-09 05:07:12.915310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-09 05:07:12.915321 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-09 05:07:12.915332 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-09 05:07:12.915343 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-09 05:07:12.915377 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-09 05:07:12.915397 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-09 05:07:12.915415 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-09 05:07:12.915433 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-09 05:07:12.915450 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-09 05:07:12.915467 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-09 05:07:12.915484 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-09 05:07:12.915502 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-09 05:07:12.915520 | orchestrator |
2026-02-09 05:07:12.915539 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-09 05:07:12.915557 | orchestrator |
2026-02-09 05:07:12.915577 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-09 05:07:12.915590 | orchestrator | Monday 09 February 2026 05:06:45 +0000 (0:00:04.255) 0:02:48.315 *******
2026-02-09 05:07:12.915601 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:07:12.915612 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:07:12.915622 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:07:12.915633 | orchestrator |
2026-02-09 05:07:12.915644 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-09 05:07:12.915655 | orchestrator | Monday 09 February 2026 05:06:46 +0000 (0:00:01.444) 0:02:49.760 *******
2026-02-09 05:07:12.915665 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:07:12.915676 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:07:12.915687 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:07:12.915697 | orchestrator |
2026-02-09 05:07:12.915708 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-09 05:07:12.915719 | orchestrator | Monday 09 February 2026 05:06:48 +0000 (0:00:01.658) 0:02:51.418 *******
2026-02-09 05:07:12.915730 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:07:12.915740 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:07:12.915751 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:07:12.915762 | orchestrator |
2026-02-09 05:07:12.915772 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-09 05:07:12.915783 | orchestrator | Monday 09 February 2026 05:06:49 +0000 (0:00:01.590) 0:02:53.008 *******
2026-02-09 05:07:12.915794 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 05:07:12.915805 | orchestrator |
2026-02-09 05:07:12.915816 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-09 05:07:12.915826 | orchestrator | Monday 09 February 2026 05:06:51 +0000 (0:00:01.713) 0:02:54.722 *******
2026-02-09 05:07:12.915837 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:07:12.915848 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:07:12.915858 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:07:12.915879 | orchestrator |
2026-02-09 05:07:12.915890 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-09 05:07:12.915906 | orchestrator | Monday 09 February 2026 05:06:52 +0000 (0:00:01.418) 0:02:56.140 *******
2026-02-09 05:07:12.915975 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:07:12.915996 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:07:12.916014 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:07:12.916031 | orchestrator |
2026-02-09 05:07:12.916042 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-09 05:07:12.916053 | orchestrator | Monday 09 February 2026 05:06:54 +0000 (0:00:01.672) 0:02:57.813 *******
2026-02-09 05:07:12.916064 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:07:12.916074 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:07:12.916085 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:07:12.916095 | orchestrator |
2026-02-09 05:07:12.916106 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-09 05:07:12.916117 | orchestrator | Monday 09 February 2026 05:06:55 +0000 (0:00:01.383) 0:02:59.196 *******
2026-02-09 05:07:12.916128 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:07:12.916138 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:07:12.916162 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:07:12.916173 | orchestrator |
2026-02-09 05:07:12.916185 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-09 05:07:12.916196 | orchestrator | Monday 09 February 2026 05:06:57 +0000 (0:00:01.733) 0:03:00.930 *******
2026-02-09 05:07:12.916206 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:07:12.916217 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:07:12.916228 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:07:12.916239 | orchestrator |
2026-02-09 05:07:12.916249 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-09 05:07:12.916260 | orchestrator | Monday 09 February 2026 05:06:59 +0000 (0:00:02.141) 0:03:03.072 *******
2026-02-09 05:07:12.916271 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:07:12.916282 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:07:12.916292 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:07:12.916303 | orchestrator |
2026-02-09 05:07:12.916314 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-09 05:07:12.916324 | orchestrator | Monday 09 February 2026 05:07:02 +0000 (0:00:02.381) 0:03:05.453 *******
2026-02-09 05:07:12.916335 | orchestrator | changed: [testbed-node-5]
2026-02-09 05:07:12.916346 | orchestrator | changed: [testbed-node-4]
2026-02-09 05:07:12.916356 | orchestrator | changed: [testbed-node-3]
2026-02-09 05:07:12.916367 | orchestrator |
2026-02-09 05:07:12.916378 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-09 05:07:12.916388 | orchestrator |
2026-02-09 05:07:12.916399 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-09 05:07:12.916410 | orchestrator | Monday 09 February 2026 05:07:10 +0000 (0:00:08.570) 0:03:14.023 *******
2026-02-09 05:07:12.916421 | orchestrator | ok: [testbed-manager]
2026-02-09 05:07:12.916431 | orchestrator |
2026-02-09 05:07:12.916442 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-09 05:07:12.916463 | orchestrator | Monday 09 February 2026 05:07:12 +0000 (0:00:02.155) 0:03:16.179 *******
2026-02-09 05:08:22.618391 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.618524 | orchestrator |
2026-02-09 05:08:22.618548 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-09 05:08:22.618565 | orchestrator | Monday 09 February 2026 05:07:14 +0000 (0:00:01.425) 0:03:17.604 *******
2026-02-09 05:08:22.618581 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-09 05:08:22.618597 | orchestrator |
2026-02-09 05:08:22.618611 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-09 05:08:22.618626 | orchestrator | Monday 09 February 2026 05:07:15 +0000 (0:00:01.528) 0:03:19.133 *******
2026-02-09 05:08:22.618640 | orchestrator | changed: [testbed-manager]
2026-02-09 05:08:22.618656 | orchestrator |
2026-02-09 05:08:22.618697 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-09 05:08:22.618713 | orchestrator | Monday 09 February 2026 05:07:17 +0000 (0:00:01.944) 0:03:21.078 *******
2026-02-09 05:08:22.618728 | orchestrator | changed: [testbed-manager]
2026-02-09 05:08:22.618743 | orchestrator |
2026-02-09 05:08:22.618797 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-09 05:08:22.618830 | orchestrator | Monday 09 February 2026 05:07:19 +0000 (0:00:01.613) 0:03:22.691 *******
2026-02-09 05:08:22.618846 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-09 05:08:22.618860 | orchestrator |
2026-02-09 05:08:22.618874 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-09 05:08:22.618888 | orchestrator | Monday 09 February 2026 05:07:22 +0000 (0:00:03.001) 0:03:25.693 *******
2026-02-09 05:08:22.618902 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-09 05:08:22.618918 | orchestrator |
2026-02-09 05:08:22.618934 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-09 05:08:22.618949 | orchestrator | Monday 09 February 2026 05:07:24 +0000 (0:00:01.880) 0:03:27.573 *******
2026-02-09 05:08:22.618964 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.618981 | orchestrator |
2026-02-09 05:08:22.618997 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-09 05:08:22.619011 | orchestrator | Monday 09 February 2026 05:07:25 +0000 (0:00:01.585) 0:03:29.159 *******
2026-02-09 05:08:22.619026 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.619040 | orchestrator |
2026-02-09 05:08:22.619054 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-09 05:08:22.619068 | orchestrator |
2026-02-09 05:08:22.619084 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-09 05:08:22.619099 | orchestrator | Monday 09 February 2026 05:07:27 +0000 (0:00:01.601) 0:03:30.761 *******
2026-02-09 05:08:22.619114 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.619128 | orchestrator |
2026-02-09 05:08:22.619143 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-09 05:08:22.619157 | orchestrator | Monday 09 February 2026 05:07:28 +0000 (0:00:01.145) 0:03:31.906 *******
2026-02-09 05:08:22.619172 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-09 05:08:22.619189 | orchestrator |
2026-02-09 05:08:22.619204 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-09 05:08:22.619219 | orchestrator | Monday 09 February 2026 05:07:30 +0000 (0:00:01.519) 0:03:33.426 *******
2026-02-09 05:08:22.619234 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.619249 | orchestrator |
2026-02-09 05:08:22.619264 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-09 05:08:22.619278 | orchestrator | Monday 09 February 2026 05:07:32 +0000 (0:00:01.928) 0:03:35.354 *******
2026-02-09 05:08:22.619293 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.619307 | orchestrator |
2026-02-09 05:08:22.619321 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-09 05:08:22.619336 | orchestrator | Monday 09 February 2026 05:07:34 +0000 (0:00:02.658) 0:03:38.012 *******
2026-02-09 05:08:22.619350 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.619364 | orchestrator |
2026-02-09 05:08:22.619379 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-09 05:08:22.619392 | orchestrator | Monday 09 February 2026 05:07:36 +0000 (0:00:01.494) 0:03:39.507 *******
2026-02-09 05:08:22.619407 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.619422 | orchestrator |
2026-02-09 05:08:22.619437 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-09 05:08:22.619452 | orchestrator | Monday 09 February 2026 05:07:37 +0000 (0:00:01.481) 0:03:40.988 *******
2026-02-09 05:08:22.619467 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.619481 | orchestrator |
2026-02-09 05:08:22.619496 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-09 05:08:22.619524 | orchestrator | Monday 09 February 2026 05:07:39 +0000 (0:00:01.630) 0:03:42.619 *******
2026-02-09 05:08:22.619539 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.619553 | orchestrator |
2026-02-09 05:08:22.619566 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-09 05:08:22.619581 | orchestrator | Monday 09 February 2026 05:07:41 +0000 (0:00:02.494) 0:03:45.114 *******
2026-02-09 05:08:22.619595 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:22.619610 | orchestrator |
2026-02-09 05:08:22.619624 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-09 05:08:22.619638 | orchestrator |
2026-02-09 05:08:22.619652 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-09 05:08:22.619666 | orchestrator | Monday 09 February 2026 05:07:43 +0000 (0:00:01.717) 0:03:46.831 *******
2026-02-09 05:08:22.619680 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:08:22.619695 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:08:22.619709 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:08:22.619724 | orchestrator |
2026-02-09 05:08:22.619738 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-09 05:08:22.619775 | orchestrator | Monday 09 February 2026 05:07:44 +0000 (0:00:01.325) 0:03:48.157 *******
2026-02-09 05:08:22.619791 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:08:22.619805 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:08:22.619819 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:08:22.619833 | orchestrator |
2026-02-09 05:08:22.619869 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-09 05:08:22.619883 | orchestrator | Monday 09 February 2026 05:07:46 +0000 (0:00:01.621) 0:03:49.778 *******
2026-02-09 05:08:22.619898 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 05:08:22.619913 | orchestrator |
2026-02-09 05:08:22.619928 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-09 05:08:22.619942 | orchestrator | Monday 09 February 2026 05:07:48 +0000 (0:00:01.689) 0:03:51.468 *******
2026-02-09 05:08:22.619956 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-09 05:08:22.619971 | orchestrator |
2026-02-09 05:08:22.619985 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-09 05:08:22.619999 | orchestrator | Monday 09 February 2026 05:07:49 +0000 (0:00:01.780) 0:03:53.249 *******
2026-02-09 05:08:22.620014 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 05:08:22.620028 | orchestrator |
2026-02-09 05:08:22.620042 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-09 05:08:22.620057 | orchestrator | Monday 09 February 2026 05:07:51 +0000 (0:00:01.870) 0:03:55.119 *******
2026-02-09 05:08:22.620072 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:08:22.620085 | orchestrator |
2026-02-09 05:08:22.620099 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-09 05:08:22.620113 | orchestrator | Monday 09 February 2026 05:07:52 +0000 (0:00:01.124) 0:03:56.244 *******
2026-02-09 05:08:22.620126 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 05:08:22.620139 | orchestrator |
2026-02-09 05:08:22.620154 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-09 05:08:22.620170 | orchestrator | Monday 09 February 2026 05:07:55 +0000 (0:00:02.042) 0:03:58.287 *******
2026-02-09 05:08:22.620185 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 05:08:22.620201 | orchestrator |
2026-02-09 05:08:22.620215 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-09 05:08:22.620230 | orchestrator | Monday 09 February 2026 05:07:57 +0000 (0:00:02.340) 0:04:00.627 *******
2026-02-09 05:08:22.620244 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 05:08:22.620258 | orchestrator |
2026-02-09 05:08:22.620272 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-09 05:08:22.620286 | orchestrator | Monday 09 February 2026 05:07:58 +0000 (0:00:01.155) 0:04:01.783 *******
2026-02-09 05:08:22.620311 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 05:08:22.620326 | orchestrator |
2026-02-09 05:08:22.620341 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-09 05:08:22.620355 | orchestrator | Monday 09 February 2026 05:07:59 +0000 (0:00:01.141) 0:04:02.924 *******
2026-02-09 05:08:22.620369 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-02-09 05:08:22.620383 | orchestrator |     "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-02-09 05:08:22.620400 | orchestrator | }
2026-02-09 05:08:22.620414 | orchestrator |
2026-02-09 05:08:22.620428 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-09 05:08:22.620443 | orchestrator | Monday 09 February 2026 05:08:00 +0000 (0:00:01.163) 0:04:04.087 *******
2026-02-09 05:08:22.620457 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:08:22.620471 | orchestrator |
2026-02-09 05:08:22.620485 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-09 05:08:22.620498 | orchestrator | Monday 09 February 2026 05:08:01 +0000 (0:00:01.138) 0:04:05.226 *******
2026-02-09 05:08:22.620513 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-09 05:08:22.620526 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-09 05:08:22.620540 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-09 05:08:22.620555 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-09 05:08:22.620568 | orchestrator |
2026-02-09 05:08:22.620582 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-09 05:08:22.620596 | orchestrator | Monday 09 February 2026 05:08:07 +0000 (0:00:05.664) 0:04:10.891 *******
2026-02-09 05:08:22.620610 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-09 05:08:22.620625 | orchestrator |
2026-02-09 05:08:22.620639 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-09 05:08:22.620653 | orchestrator | Monday 09 February 2026 05:08:10 +0000 (0:00:02.511) 0:04:13.402 *******
2026-02-09 05:08:22.620668 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-09 05:08:22.620682 | orchestrator |
2026-02-09 05:08:22.620698 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-09 05:08:22.620724 | orchestrator | Monday 09 February 2026 05:08:12 +0000 (0:00:02.615) 0:04:16.018 *******
2026-02-09 05:08:22.620739 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-09 05:08:22.620786 | orchestrator |
2026-02-09 05:08:22.620803 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-09 05:08:22.620817 | orchestrator | Monday 09 February 2026 05:08:17 +0000 (0:00:04.280) 0:04:20.298 *******
2026-02-09 05:08:22.620831 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:08:22.620845 | orchestrator |
2026-02-09 05:08:22.620860 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-09 05:08:22.620874 | orchestrator | Monday 09 February 2026 05:08:18 +0000 (0:00:01.202) 0:04:21.500 *******
2026-02-09 05:08:22.620889 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-09 05:08:22.620904 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-09 05:08:22.620919 | orchestrator |
2026-02-09 05:08:22.620933 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-09 05:08:22.620947 | orchestrator | Monday 09 February 2026 05:08:21 +0000 (0:00:03.012) 0:04:24.513 *******
2026-02-09 05:08:22.620962 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:08:22.620989 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:08:49.207984 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:08:49.208117 | orchestrator |
2026-02-09 05:08:49.208130 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-09 05:08:49.208140 | orchestrator | Monday 09 February 2026 05:08:22 +0000 (0:00:01.374) 0:04:25.888 *******
2026-02-09 05:08:49.208173 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:08:49.208182 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:08:49.208189 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:08:49.208196 | orchestrator |
2026-02-09 05:08:49.208204 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-09 05:08:49.208211 | orchestrator |
2026-02-09 05:08:49.208219 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-09 05:08:49.208226 | orchestrator | Monday 09 February 2026 05:08:24 +0000 (0:00:02.066) 0:04:27.954 *******
2026-02-09 05:08:49.208233 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:49.208240 | orchestrator |
2026-02-09 05:08:49.208248 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-09 05:08:49.208255 | orchestrator | Monday 09 February 2026 05:08:25 +0000 (0:00:01.104) 0:04:29.058 *******
2026-02-09 05:08:49.208278 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-09 05:08:49.208287 | orchestrator |
2026-02-09 05:08:49.208294 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-09 05:08:49.208301 | orchestrator | Monday 09 February 2026 05:08:27 +0000 (0:00:01.440) 0:04:30.499 *******
2026-02-09 05:08:49.208308 | orchestrator | ok: [testbed-manager]
2026-02-09 05:08:49.208315 | orchestrator |
2026-02-09 05:08:49.208322 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-09 05:08:49.208329 | orchestrator |
2026-02-09 05:08:49.208336 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-09 05:08:49.208343 | orchestrator | Monday 09 February 2026 05:08:32 +0000 (0:00:05.276) 0:04:35.776 *******
2026-02-09 05:08:49.208350 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:08:49.208357 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:08:49.208364 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:08:49.208371 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:08:49.208378 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:08:49.208385 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:08:49.208392 | orchestrator |
2026-02-09 05:08:49.208408 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-09 05:08:49.208416 | orchestrator | Monday 09 February 2026 05:08:34 +0000 (0:00:01.914) 0:04:37.690 *******
2026-02-09 05:08:49.208423 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-09 05:08:49.208431 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-09 05:08:49.208438 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-09 05:08:49.208445 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-09 05:08:49.208452 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-09 05:08:49.208459 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-09 05:08:49.208466 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-09 05:08:49.208473 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-09 05:08:49.208480 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-09 05:08:49.208487 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-09 05:08:49.208495 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-09 05:08:49.208503 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-09 05:08:49.208512 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-09 05:08:49.208520 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-09 05:08:49.208528 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-09 05:08:49.208543 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-09 05:08:49.208552 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-09 05:08:49.208561 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-09 05:08:49.208569 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-09 05:08:49.208578 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-09 05:08:49.208586 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-09 05:08:49.208595 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-09 05:08:49.208603 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-09 
05:08:49.208611 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-09 05:08:49.208620 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-09 05:08:49.208629 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-09 05:08:49.208652 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-09 05:08:49.208661 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-09 05:08:49.208670 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-09 05:08:49.208678 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-09 05:08:49.208687 | orchestrator | 2026-02-09 05:08:49.208713 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-09 05:08:49.208721 | orchestrator | Monday 09 February 2026 05:08:43 +0000 (0:00:09.489) 0:04:47.180 ******* 2026-02-09 05:08:49.208730 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:08:49.208739 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:08:49.208748 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:08:49.208756 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:08:49.208764 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:08:49.208772 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:08:49.208781 | orchestrator | 2026-02-09 05:08:49.208790 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-09 05:08:49.208798 | orchestrator | Monday 09 February 2026 05:08:46 +0000 (0:00:02.140) 0:04:49.320 ******* 2026-02-09 05:08:49.208806 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:08:49.208815 | orchestrator | skipping: [testbed-node-4] 
2026-02-09 05:08:49.208824 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:08:49.208833 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:08:49.208842 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:08:49.208850 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:08:49.208859 | orchestrator | 2026-02-09 05:08:49.208867 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 05:08:49.208874 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 05:08:49.208884 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-09 05:08:49.208892 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-09 05:08:49.208899 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-09 05:08:49.208906 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-09 05:08:49.208919 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-09 05:08:49.208927 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-09 05:08:49.208934 | orchestrator | 2026-02-09 05:08:49.208941 | orchestrator | 2026-02-09 05:08:49.208948 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 05:08:49.208955 | orchestrator | Monday 09 February 2026 05:08:49 +0000 (0:00:03.139) 0:04:52.459 ******* 2026-02-09 05:08:49.208962 | orchestrator | =============================================================================== 2026-02-09 05:08:49.208969 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.53s 2026-02-09 05:08:49.208977 | 
orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.80s 2026-02-09 05:08:49.208985 | orchestrator | Manage labels ----------------------------------------------------------- 9.49s 2026-02-09 05:08:49.208992 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.57s 2026-02-09 05:08:49.208999 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.66s 2026-02-09 05:08:49.209006 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.28s 2026-02-09 05:08:49.209013 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.51s 2026-02-09 05:08:49.209020 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.28s 2026-02-09 05:08:49.209027 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.26s 2026-02-09 05:08:49.209034 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 3.20s 2026-02-09 05:08:49.209041 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 3.15s 2026-02-09 05:08:49.209048 | orchestrator | Manage taints ----------------------------------------------------------- 3.14s 2026-02-09 05:08:49.209055 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 3.01s 2026-02-09 05:08:49.209062 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.00s 2026-02-09 05:08:49.209069 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.89s 2026-02-09 05:08:49.209076 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.79s 2026-02-09 05:08:49.209083 | orchestrator | k3s_prereq : Enable 
IPv6 router advertisements -------------------------- 2.71s 2026-02-09 05:08:49.209091 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.71s 2026-02-09 05:08:49.209102 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.68s 2026-02-09 05:08:49.653363 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.68s 2026-02-09 05:08:49.963039 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-09 05:08:49.963169 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-09 05:08:49.969346 | orchestrator | + set -e 2026-02-09 05:08:49.969374 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-09 05:08:49.969387 | orchestrator | ++ export INTERACTIVE=false 2026-02-09 05:08:49.969399 | orchestrator | ++ INTERACTIVE=false 2026-02-09 05:08:49.969410 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-09 05:08:49.969420 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-09 05:08:49.969431 | orchestrator | + osism apply openstackclient 2026-02-09 05:09:02.219599 | orchestrator | 2026-02-09 05:09:02 | INFO  | Task 337881a3-cadc-46b8-af08-55dc0939e139 (openstackclient) was prepared for execution. 2026-02-09 05:09:02.219790 | orchestrator | 2026-02-09 05:09:02 | INFO  | It takes a moment until task 337881a3-cadc-46b8-af08-55dc0939e139 (openstackclient) has been started and output is visible here. 
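The script trace above shows include.sh exporting INTERACTIVE=false and OSISM_APPLY_RETRY=1 before each `osism apply` call runs under `set -e`. A minimal sketch of a retry wrapper consistent with that pattern, assuming OSISM_APPLY_RETRY counts additional attempts after the first failure; the function name apply_with_retry is illustrative and not part of the testbed scripts:

```shell
#!/usr/bin/env bash
# Hypothetical retry helper in the spirit of OSISM_APPLY_RETRY.
# "$@" is the command to run; retries defaults to 1 extra attempt.
set -u

apply_with_retry() {
    local retries="${OSISM_APPLY_RETRY:-1}"
    local attempt=0
    until "$@"; do
        attempt=$((attempt + 1))
        if [ "$attempt" -gt "$retries" ]; then
            echo "failed after $((retries + 1)) attempts: $*" >&2
            return 1
        fi
        echo "retrying ($attempt/$retries): $*" >&2
    done
}

# A command that succeeds passes on the first attempt.
apply_with_retry true
```

Under `set -e`, a wrapper like this would let a transient `osism apply` failure be retried once before the whole upgrade script aborts.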
2026-02-09 05:09:27.714471 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-09 05:09:27.714615 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-09 05:09:27.714719 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-09 05:09:27.714731 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-09 05:09:27.714756 | orchestrator | 2026-02-09 05:09:27.714770 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-09 05:09:27.714781 | orchestrator | 2026-02-09 05:09:27.714791 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-09 05:09:27.714798 | orchestrator | Monday 09 February 2026 05:09:08 +0000 (0:00:01.699) 0:00:01.699 ******* 2026-02-09 05:09:27.714806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-09 05:09:27.714815 | orchestrator | 2026-02-09 05:09:27.714822 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-09 05:09:27.714829 | orchestrator | Monday 09 February 2026 05:09:09 +0000 (0:00:00.830) 0:00:02.530 ******* 2026-02-09 05:09:27.714836 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-09 05:09:27.714842 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-09 05:09:27.714849 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-09 05:09:27.714857 | orchestrator | 2026-02-09 05:09:27.714863 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-09 05:09:27.714870 | orchestrator | Monday 09 February 2026 05:09:10 +0000 (0:00:01.179) 0:00:03.709 ******* 2026-02-09 05:09:27.714877 | 
orchestrator | changed: [testbed-manager] 2026-02-09 05:09:27.714884 | orchestrator | 2026-02-09 05:09:27.714891 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-09 05:09:27.714897 | orchestrator | Monday 09 February 2026 05:09:11 +0000 (0:00:01.156) 0:00:04.866 ******* 2026-02-09 05:09:27.714905 | orchestrator | ok: [testbed-manager] 2026-02-09 05:09:27.714914 | orchestrator | 2026-02-09 05:09:27.714920 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-09 05:09:27.714927 | orchestrator | Monday 09 February 2026 05:09:12 +0000 (0:00:01.025) 0:00:05.891 ******* 2026-02-09 05:09:27.714934 | orchestrator | ok: [testbed-manager] 2026-02-09 05:09:27.714940 | orchestrator | 2026-02-09 05:09:27.714947 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-09 05:09:27.714955 | orchestrator | Monday 09 February 2026 05:09:13 +0000 (0:00:00.899) 0:00:06.790 ******* 2026-02-09 05:09:27.714964 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-09 05:09:27.714972 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-09 05:09:27.714988 | orchestrator | ok: [testbed-manager] 2026-02-09 05:09:27.714996 | orchestrator | 2026-02-09 05:09:27.715004 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-09 05:09:27.715012 | orchestrator | Monday 09 February 2026 05:09:14 +0000 (0:00:00.740) 0:00:07.531 ******* 2026-02-09 05:09:27.715020 | orchestrator | changed: [testbed-manager] 2026-02-09 05:09:27.715027 | orchestrator | 2026-02-09 05:09:27.715034 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-09 05:09:27.715041 | orchestrator | Monday 09 February 2026 05:09:24 +0000 (0:00:09.941) 0:00:17.472 ******* 2026-02-09 05:09:27.715047 
| orchestrator | changed: [testbed-manager] 2026-02-09 05:09:27.715054 | orchestrator | 2026-02-09 05:09:27.715086 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-09 05:09:27.715093 | orchestrator | Monday 09 February 2026 05:09:25 +0000 (0:00:01.298) 0:00:18.771 ******* 2026-02-09 05:09:27.715100 | orchestrator | changed: [testbed-manager] 2026-02-09 05:09:27.715106 | orchestrator | 2026-02-09 05:09:27.715113 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-09 05:09:27.715120 | orchestrator | Monday 09 February 2026 05:09:26 +0000 (0:00:00.646) 0:00:19.417 ******* 2026-02-09 05:09:27.715126 | orchestrator | ok: [testbed-manager] 2026-02-09 05:09:27.715133 | orchestrator | 2026-02-09 05:09:27.715140 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 05:09:27.715146 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-09 05:09:27.715154 | orchestrator | 2026-02-09 05:09:27.715160 | orchestrator | 2026-02-09 05:09:27.715167 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 05:09:27.715174 | orchestrator | Monday 09 February 2026 05:09:27 +0000 (0:00:01.226) 0:00:20.644 ******* 2026-02-09 05:09:27.715180 | orchestrator | =============================================================================== 2026-02-09 05:09:27.715187 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 9.94s 2026-02-09 05:09:27.715193 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.30s 2026-02-09 05:09:27.715200 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.23s 2026-02-09 05:09:27.715207 | orchestrator | osism.services.openstackclient : Create required directories 
------------ 1.18s 2026-02-09 05:09:27.715213 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.16s 2026-02-09 05:09:27.715220 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 1.03s 2026-02-09 05:09:27.715242 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.90s 2026-02-09 05:09:27.715256 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.83s 2026-02-09 05:09:27.715263 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.74s 2026-02-09 05:09:27.715269 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.65s 2026-02-09 05:09:28.039817 | orchestrator | + osism apply -a upgrade common 2026-02-09 05:09:30.150232 | orchestrator | 2026-02-09 05:09:30 | INFO  | Task 67381a0c-c36d-44c9-b555-e2ee3d71ecc0 (common) was prepared for execution. 2026-02-09 05:09:30.150332 | orchestrator | 2026-02-09 05:09:30 | INFO  | It takes a moment until task 67381a0c-c36d-44c9-b555-e2ee3d71ecc0 (common) has been started and output is visible here. 
2026-02-09 05:09:49.391284 | orchestrator | 2026-02-09 05:09:49.391423 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-09 05:09:49.391440 | orchestrator | 2026-02-09 05:09:49.391452 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-09 05:09:49.391462 | orchestrator | Monday 09 February 2026 05:09:36 +0000 (0:00:02.360) 0:00:02.360 ******* 2026-02-09 05:09:49.391473 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 05:09:49.391484 | orchestrator | 2026-02-09 05:09:49.391494 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-09 05:09:49.391504 | orchestrator | Monday 09 February 2026 05:09:40 +0000 (0:00:03.147) 0:00:05.507 ******* 2026-02-09 05:09:49.391514 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:09:49.391524 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:09:49.391533 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:09:49.391543 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:09:49.391583 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:09:49.391619 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:09:49.391629 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:09:49.391638 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:09:49.391648 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:09:49.391657 | orchestrator | 
ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:09:49.391667 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:09:49.391676 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:09:49.391686 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:09:49.391695 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:09:49.391705 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:09:49.391714 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:09:49.391724 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:09:49.391733 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:09:49.391743 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:09:49.391752 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:09:49.391761 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:09:49.391771 | orchestrator | 2026-02-09 05:09:49.391780 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-09 05:09:49.391790 | orchestrator | Monday 09 February 2026 05:09:43 +0000 (0:00:03.507) 0:00:09.014 ******* 2026-02-09 05:09:49.391799 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 05:09:49.391811 | orchestrator | 2026-02-09 
05:09:49.391821 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-09 05:09:49.391830 | orchestrator | Monday 09 February 2026 05:09:46 +0000 (0:00:03.083) 0:00:12.098 ******* 2026-02-09 05:09:49.391846 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:09:49.391878 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:09:49.391918 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:09:49.391939 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:09:49.391949 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:09:49.391959 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:09:49.392152 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:49.392166 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:09:49.392176 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:49.392203 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.331656 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.331852 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.331880 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.331917 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.331943 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.331965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.331987 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.332076 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.332098 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.332118 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.332145 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:09:52.332167 | orchestrator | 2026-02-09 05:09:52.332189 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-09 05:09:52.332209 | orchestrator | Monday 09 February 2026 05:09:51 +0000 (0:00:04.772) 0:00:16.871 ******* 2026-02-09 05:09:52.332235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:52.332259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:52.332279 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:52.332314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:52.332358 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.787620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.787727 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:09:54.787744 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:54.787757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:54.787858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.787873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.787903 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:09:54.787913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.787923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.787932 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:09:54.787941 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:09:54.787965 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:54.787975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:54.787988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.787998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.788007 | orchestrator | skipping: [testbed-node-3] 2026-02-09 
05:09:54.788016 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.788062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:54.788073 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:09:54.788082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:54.788099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:58.493397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:58.493512 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:09:58.493524 | orchestrator | 2026-02-09 05:09:58.493532 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-09 05:09:58.493540 | orchestrator | Monday 09 February 2026 05:09:54 +0000 (0:00:03.308) 0:00:20.179 ******* 2026-02-09 05:09:58.493567 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:58.493613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:58.493621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:58.493651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:58.493657 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:58.493680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:58.493687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:58.493694 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:09:58.493701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-09 05:09:58.493708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:58.493715 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:58.493727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:58.493733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:09:58.493740 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:09:58.493746 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:09:58.493753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:09:58.493759 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:09:58.493779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:10:11.386716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:11.386845 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:11.386861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:11.386895 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:10:11.386907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}})  2026-02-09 05:10:11.386917 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:11.386927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:11.386936 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:10:11.386945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:11.386955 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:10:11.386964 | orchestrator | 2026-02-09 05:10:11.386974 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-09 05:10:11.386984 | orchestrator | Monday 09 February 2026 05:09:58 
+0000 (0:00:03.716) 0:00:23.896 ******* 2026-02-09 05:10:11.386993 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:10:11.387002 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:10:11.387025 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:10:11.387034 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:10:11.387043 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:10:11.387051 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:10:11.387060 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:10:11.387069 | orchestrator | 2026-02-09 05:10:11.387078 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-09 05:10:11.387087 | orchestrator | Monday 09 February 2026 05:10:00 +0000 (0:00:02.264) 0:00:26.160 ******* 2026-02-09 05:10:11.387095 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:10:11.387104 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:10:11.387112 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:10:11.387121 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:10:11.387130 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:10:11.387139 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:10:11.387147 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:10:11.387156 | orchestrator | 2026-02-09 05:10:11.387175 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-09 05:10:11.387192 | orchestrator | Monday 09 February 2026 05:10:02 +0000 (0:00:02.217) 0:00:28.378 ******* 2026-02-09 05:10:11.387206 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:10:11.387219 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:10:11.387232 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:10:11.387244 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:10:11.387256 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:10:11.387268 | 
orchestrator | skipping: [testbed-node-4] 2026-02-09 05:10:11.387280 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:10:11.387293 | orchestrator | 2026-02-09 05:10:11.387306 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-09 05:10:11.387319 | orchestrator | Monday 09 February 2026 05:10:05 +0000 (0:00:02.190) 0:00:30.568 ******* 2026-02-09 05:10:11.387331 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:10:11.387344 | orchestrator | changed: [testbed-manager] 2026-02-09 05:10:11.387356 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:10:11.387369 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:10:11.387382 | orchestrator | changed: [testbed-node-3] 2026-02-09 05:10:11.387395 | orchestrator | changed: [testbed-node-4] 2026-02-09 05:10:11.387407 | orchestrator | changed: [testbed-node-5] 2026-02-09 05:10:11.387420 | orchestrator | 2026-02-09 05:10:11.387432 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-09 05:10:11.387445 | orchestrator | Monday 09 February 2026 05:10:08 +0000 (0:00:03.005) 0:00:33.573 ******* 2026-02-09 05:10:11.387458 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:11.387473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:11.387486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:11.387500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:11.387529 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:13.281261 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281368 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:13.281386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:13.281399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281493 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281709 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:13.281725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:33.955885 | orchestrator | 2026-02-09 05:10:33.956019 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-09 05:10:33.956047 | orchestrator | Monday 09 February 2026 05:10:13 +0000 (0:00:05.112) 0:00:38.686 ******* 2026-02-09 05:10:33.956065 | orchestrator | [WARNING]: Skipped 2026-02-09 05:10:33.956103 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-09 05:10:33.956122 | orchestrator | to this access issue: 2026-02-09 05:10:33.956140 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-09 05:10:33.956156 | orchestrator | directory 2026-02-09 05:10:33.956173 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 05:10:33.956190 | orchestrator | 2026-02-09 05:10:33.956207 | orchestrator 
| TASK [common : Find custom fluentd filter config files] ************************ 2026-02-09 05:10:33.956224 | orchestrator | Monday 09 February 2026 05:10:15 +0000 (0:00:02.396) 0:00:41.082 ******* 2026-02-09 05:10:33.956240 | orchestrator | [WARNING]: Skipped 2026-02-09 05:10:33.956258 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-09 05:10:33.956273 | orchestrator | to this access issue: 2026-02-09 05:10:33.956289 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-09 05:10:33.956305 | orchestrator | directory 2026-02-09 05:10:33.956320 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 05:10:33.956337 | orchestrator | 2026-02-09 05:10:33.956355 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-09 05:10:33.956372 | orchestrator | Monday 09 February 2026 05:10:17 +0000 (0:00:01.828) 0:00:42.911 ******* 2026-02-09 05:10:33.956388 | orchestrator | [WARNING]: Skipped 2026-02-09 05:10:33.956405 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-09 05:10:33.956421 | orchestrator | to this access issue: 2026-02-09 05:10:33.956438 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-09 05:10:33.956454 | orchestrator | directory 2026-02-09 05:10:33.956470 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 05:10:33.956486 | orchestrator | 2026-02-09 05:10:33.956503 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-09 05:10:33.956578 | orchestrator | Monday 09 February 2026 05:10:19 +0000 (0:00:01.878) 0:00:44.790 ******* 2026-02-09 05:10:33.956600 | orchestrator | [WARNING]: Skipped 2026-02-09 05:10:33.956618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-09 
05:10:33.956635 | orchestrator | to this access issue: 2026-02-09 05:10:33.956653 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-09 05:10:33.956671 | orchestrator | directory 2026-02-09 05:10:33.956689 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-09 05:10:33.956737 | orchestrator | 2026-02-09 05:10:33.956755 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-09 05:10:33.956772 | orchestrator | Monday 09 February 2026 05:10:21 +0000 (0:00:01.825) 0:00:46.615 ******* 2026-02-09 05:10:33.956787 | orchestrator | changed: [testbed-manager] 2026-02-09 05:10:33.956802 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:10:33.956819 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:10:33.956835 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:10:33.956851 | orchestrator | changed: [testbed-node-3] 2026-02-09 05:10:33.956866 | orchestrator | changed: [testbed-node-4] 2026-02-09 05:10:33.956882 | orchestrator | changed: [testbed-node-5] 2026-02-09 05:10:33.956897 | orchestrator | 2026-02-09 05:10:33.956913 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-09 05:10:33.956929 | orchestrator | Monday 09 February 2026 05:10:25 +0000 (0:00:03.948) 0:00:50.564 ******* 2026-02-09 05:10:33.956945 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 05:10:33.956963 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 05:10:33.956979 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 05:10:33.956995 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 05:10:33.957011 | orchestrator | ok: [testbed-node-3] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 05:10:33.957021 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 05:10:33.957030 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-09 05:10:33.957040 | orchestrator | 2026-02-09 05:10:33.957049 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-09 05:10:33.957059 | orchestrator | Monday 09 February 2026 05:10:28 +0000 (0:00:03.090) 0:00:53.655 ******* 2026-02-09 05:10:33.957068 | orchestrator | ok: [testbed-manager] 2026-02-09 05:10:33.957078 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:10:33.957087 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:10:33.957097 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:10:33.957106 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:10:33.957115 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:10:33.957125 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:10:33.957134 | orchestrator | 2026-02-09 05:10:33.957143 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-09 05:10:33.957153 | orchestrator | Monday 09 February 2026 05:10:31 +0000 (0:00:02.777) 0:00:56.433 ******* 2026-02-09 05:10:33.957185 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:33.957207 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:33.957219 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:33.957241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:33.957261 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:33.957280 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:33.957296 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:33.957312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-09 05:10:33.957347 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:43.066247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:43.066401 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:43.066415 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:43.066427 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:43.066438 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:43.066448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:43.066457 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:43.066502 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:43.066570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:43.066580 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:43.066590 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:43.066599 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:43.066609 | orchestrator | 2026-02-09 05:10:43.066619 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-09 05:10:43.066630 | orchestrator | Monday 09 February 2026 05:10:33 +0000 (0:00:02.918) 0:00:59.352 ******* 2026-02-09 05:10:43.066639 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 05:10:43.066649 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 05:10:43.066657 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 05:10:43.066666 | orchestrator | ok: [testbed-node-2] => 
(item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 05:10:43.066675 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 05:10:43.066683 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 05:10:43.066692 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-09 05:10:43.066701 | orchestrator | 2026-02-09 05:10:43.066710 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-09 05:10:43.066718 | orchestrator | Monday 09 February 2026 05:10:37 +0000 (0:00:03.140) 0:01:02.492 ******* 2026-02-09 05:10:43.066728 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 05:10:43.066736 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 05:10:43.066745 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 05:10:43.066754 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 05:10:43.066770 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 05:10:43.066778 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 05:10:43.066787 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-09 05:10:43.066796 | orchestrator | 2026-02-09 05:10:43.066804 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-09 05:10:43.066814 | orchestrator | Monday 09 February 2026 05:10:40 +0000 (0:00:03.494) 0:01:05.987 ******* 2026-02-09 05:10:43.066840 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:45.125462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:45.125592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:45.125602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:45.125608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:45.125613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:45.125619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:10:45.125667 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:45.125690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:45.125697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:45.125707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:45.125713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:45.125718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:45.125729 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:45.125740 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:45.125759 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:47.912187 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:47.912294 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:47.912309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:47.912322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:47.912334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:10:47.912371 | orchestrator | 2026-02-09 05:10:47.912386 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-09 05:10:47.912398 | orchestrator 
| Monday 09 February 2026 05:10:45 +0000 (0:00:04.546) 0:01:10.533 *******
2026-02-09 05:10:47.912410 | orchestrator | changed: [testbed-manager] => {
2026-02-09 05:10:47.912423 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:10:47.912434 | orchestrator | }
2026-02-09 05:10:47.912445 | orchestrator | changed: [testbed-node-0] => {
2026-02-09 05:10:47.912456 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:10:47.912467 | orchestrator | }
2026-02-09 05:10:47.912477 | orchestrator | changed: [testbed-node-1] => {
2026-02-09 05:10:47.912488 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:10:47.912563 | orchestrator | }
2026-02-09 05:10:47.912577 | orchestrator | changed: [testbed-node-2] => {
2026-02-09 05:10:47.912588 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:10:47.912599 | orchestrator | }
2026-02-09 05:10:47.912609 | orchestrator | changed: [testbed-node-3] => {
2026-02-09 05:10:47.912620 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:10:47.912631 | orchestrator | }
2026-02-09 05:10:47.912641 | orchestrator | changed: [testbed-node-4] => {
2026-02-09 05:10:47.912652 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:10:47.912663 | orchestrator | }
2026-02-09 05:10:47.912674 | orchestrator | changed: [testbed-node-5] => {
2026-02-09 05:10:47.912685 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:10:47.912695 | orchestrator | }
2026-02-09 05:10:47.912706 | orchestrator |
2026-02-09 05:10:47.912719 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-09 05:10:47.912732 | orchestrator | Monday 09 February 2026 05:10:47 +0000 (0:00:02.140) 0:01:12.674 *******
2026-02-09 05:10:47.912764 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment':
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:10:47.912802 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:47.912817 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:47.912831 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:10:47.912845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:10:47.912872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:47.912886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:47.912900 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:10:47.912914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:10:47.912932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:47.912946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:47.912959 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:10:47.912981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:10:56.674174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:56.674342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:56.674362 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:10:56.674376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:10:56.674389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:56.674400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:56.674412 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:10:56.674435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:10:56.674448 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:56.674482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:56.674535 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:10:56.674548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:10:56.674559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:10:56.674571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 05:10:56.674585 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:10:56.674598 | orchestrator |
2026-02-09 05:10:56.674612 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-09 05:10:56.674626 | orchestrator | Monday 09 February 2026 05:10:50 +0000 (0:00:02.933) 0:01:15.608 *******
2026-02-09 05:10:56.674639 | orchestrator |
2026-02-09 05:10:56.674653 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-09 05:10:56.674665 | orchestrator | Monday 09 February 2026 05:10:50 +0000 (0:00:00.473) 0:01:16.081 *******
2026-02-09 05:10:56.674677 | orchestrator |
2026-02-09 05:10:56.674690 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-09 05:10:56.674702 | orchestrator | Monday 09 February 2026 05:10:51 +0000 (0:00:00.498) 0:01:16.580 *******
2026-02-09 05:10:56.674714 | orchestrator |
2026-02-09 05:10:56.674726 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-09 05:10:56.674738 | orchestrator | Monday 09 February 2026 05:10:51 +0000 (0:00:00.432) 0:01:17.012 *******
2026-02-09 05:10:56.674751 | orchestrator |
2026-02-09 05:10:56.674762 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-09 05:10:56.674775 | orchestrator | Monday 09 February 2026 05:10:52 +0000 (0:00:00.428) 0:01:17.441 *******
2026-02-09 05:10:56.674788 | orchestrator |
2026-02-09 05:10:56.674806 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-09 05:10:56.674819 | orchestrator | Monday 09 February 2026 05:10:52 +0000 (0:00:00.747) 0:01:18.188 *******
2026-02-09 05:10:56.674831 |
orchestrator | 2026-02-09 05:10:56.674844 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 05:10:56.674857 | orchestrator | Monday 09 February 2026 05:10:53 +0000 (0:00:00.423) 0:01:18.611 ******* 2026-02-09 05:10:56.674869 | orchestrator | 2026-02-09 05:10:56.674881 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-09 05:10:56.674893 | orchestrator | Monday 09 February 2026 05:10:53 +0000 (0:00:00.794) 0:01:19.406 ******* 2026-02-09 05:10:56.674923 | orchestrator | fatal: [testbed-manager]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_7xqf_r9a/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_7xqf_r9a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_7xqf_r9a/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-09 05:11:00.240231 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_mhxkod8i/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_mhxkod8i/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File 
\"/tmp/ansible_kolla_container_payload_mhxkod8i/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-09 05:11:00.240399 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_umjj42qv/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_umjj42qv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_umjj42qv/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-02-09 05:11:00.240421 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "<traceback identical to the one above; only the ansible_kolla_container_payload temp directory differs>"}
2026-02-09 05:11:00.240446 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "<identical traceback>"}
2026-02-09 05:11:00.795416 | orchestrator | 2026-02-09 05:11:00 | INFO  | Task 15c94ffc-6fbe-4430-8c2a-1a8639cba31b (common) was prepared for execution.
2026-02-09 05:11:00.796014 | orchestrator | 2026-02-09 05:11:00 | INFO  | It takes a moment until task 15c94ffc-6fbe-4430-8c2a-1a8639cba31b (common) has been started and output is visible here.
2026-02-09 05:11:10.527292 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "<identical traceback>"}
2026-02-09 05:11:10.527563 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "<identical traceback>"}
2026-02-09 05:11:10.527609 | orchestrator |
2026-02-09 05:11:10.527624 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 05:11:10.527639 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-09 05:11:10.527653 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-09 05:11:10.527664 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-09 05:11:10.527675 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-09 05:11:10.527686 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-09 05:11:10.527697 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-09 05:11:10.527708 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-09 05:11:10.527718 | orchestrator |
2026-02-09 05:11:10.527730 | orchestrator |
2026-02-09 05:11:10.527742 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 05:11:10.527756 | orchestrator | Monday 09 February 2026 05:11:00 +0000 (0:00:06.250) 0:01:25.657 *******
2026-02-09 05:11:10.527768 | orchestrator | ===============================================================================
2026-02-09 05:11:10.527781 | orchestrator | common : Restart fluentd container -------------------------------------- 6.25s 2026-02-09 05:11:10.527793 | orchestrator | common : Copying over config.json files for services -------------------- 5.11s 2026-02-09 05:11:10.527806 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.77s 2026-02-09 05:11:10.527818 | orchestrator | service-check-containers : common | Check containers -------------------- 4.55s 2026-02-09 05:11:10.527831 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.95s 2026-02-09 05:11:10.527844 | orchestrator | common : Flush handlers ------------------------------------------------- 3.80s 2026-02-09 05:11:10.527857 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.72s 2026-02-09 05:11:10.527878 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.51s 2026-02-09 05:11:10.527892 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.50s 2026-02-09 05:11:10.527905 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.31s 2026-02-09 05:11:10.527919 | orchestrator | common : include_tasks -------------------------------------------------- 3.15s 2026-02-09 05:11:10.527932 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.14s 2026-02-09 05:11:10.527945 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.09s 2026-02-09 05:11:10.527956 | orchestrator | common : include_tasks -------------------------------------------------- 3.08s 2026-02-09 05:11:10.527968 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.00s 2026-02-09 05:11:10.527980 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.93s 2026-02-09 
05:11:10.527994 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.92s 2026-02-09 05:11:10.528006 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.78s 2026-02-09 05:11:10.528018 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.40s 2026-02-09 05:11:10.528030 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.26s 2026-02-09 05:11:10.528053 | orchestrator | 2026-02-09 05:11:10.528065 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-09 05:11:10.528078 | orchestrator | 2026-02-09 05:11:10.528091 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-09 05:11:10.528104 | orchestrator | Monday 09 February 2026 05:11:07 +0000 (0:00:02.190) 0:00:02.190 ******* 2026-02-09 05:11:10.528118 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 05:11:10.528132 | orchestrator | 2026-02-09 05:11:10.528155 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-09 05:11:19.479234 | orchestrator | Monday 09 February 2026 05:11:10 +0000 (0:00:03.198) 0:00:05.389 ******* 2026-02-09 05:11:19.479375 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:11:19.479394 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:11:19.479405 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:11:19.479417 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:11:19.479428 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 
05:11:19.479439 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:11:19.479495 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:11:19.479507 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:11:19.479518 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:11:19.479529 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-09 05:11:19.479540 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:11:19.479551 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:11:19.479562 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:11:19.479573 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:11:19.479584 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:11:19.479595 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:11:19.479606 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-09 05:11:19.479616 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:11:19.479627 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:11:19.479638 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:11:19.479649 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-09 05:11:19.479660 | orchestrator | 2026-02-09 05:11:19.479672 | 
orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-09 05:11:19.479683 | orchestrator | Monday 09 February 2026 05:11:13 +0000 (0:00:03.377) 0:00:08.766 ******* 2026-02-09 05:11:19.479694 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 05:11:19.479707 | orchestrator | 2026-02-09 05:11:19.479719 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-09 05:11:19.479731 | orchestrator | Monday 09 February 2026 05:11:16 +0000 (0:00:02.968) 0:00:11.735 ******* 2026-02-09 05:11:19.479749 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:19.479788 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:19.479801 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:19.479844 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:19.479858 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:19.479869 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:19.479881 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:19.479892 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:19.479912 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:19.479924 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:19.479948 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154279 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154430 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154501 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154540 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154554 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154567 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154578 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154603 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154635 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154647 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:22.154659 | orchestrator | 2026-02-09 05:11:22.154672 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-09 05:11:22.154688 | orchestrator | Monday 09 February 2026 05:11:21 +0000 (0:00:04.405) 0:00:16.140 ******* 2026-02-09 05:11:22.154709 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:22.154746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:22.154770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:22.154791 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:22.154814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 
05:11:22.154828 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:11:22.154852 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:24.232858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:24.232957 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:11:24.232974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:24.233010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:24.233022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:24.233034 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:11:24.233044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:24.233055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:24.233066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:24.233093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:24.233104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 
05:11:24.233122 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:24.233132 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:11:24.233142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:24.233152 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:11:24.233162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:24.233217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:24.233233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:24.233251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:25.511504 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:11:25.511637 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:11:25.511651 | orchestrator | 2026-02-09 05:11:25.511663 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-09 05:11:25.511703 | orchestrator | Monday 09 February 2026 05:11:24 +0000 (0:00:02.946) 0:00:19.087 ******* 2026-02-09 05:11:25.511717 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:25.511731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:25.511743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:25.511754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:25.511765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:25.511778 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:25.511806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:25.511826 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:11:25.511836 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:25.511847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:25.511857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:25.511867 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:25.511878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:25.511888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:25.511915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:25.511940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:39.426398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:39.426590 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:11:39.426607 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:11:39.426618 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:11:39.426631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:39.426643 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:11:39.426654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:39.426664 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:11:39.426675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:11:39.426686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 
05:11:39.426716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:11:39.426753 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:11:39.426764 | orchestrator | 2026-02-09 05:11:39.426774 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-09 05:11:39.426785 | orchestrator | Monday 09 February 2026 05:11:27 +0000 (0:00:03.171) 0:00:22.259 ******* 2026-02-09 05:11:39.426795 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:11:39.426805 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:11:39.426814 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:11:39.426824 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:11:39.426833 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:11:39.426843 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:11:39.426852 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:11:39.426862 | orchestrator | 2026-02-09 05:11:39.426917 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-09 05:11:39.426930 | orchestrator | Monday 09 February 2026 05:11:29 +0000 (0:00:02.367) 0:00:24.626 ******* 2026-02-09 05:11:39.426941 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:11:39.426952 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:11:39.426963 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:11:39.426992 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:11:39.427005 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:11:39.427016 | 
orchestrator | skipping: [testbed-node-4] 2026-02-09 05:11:39.427027 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:11:39.427038 | orchestrator | 2026-02-09 05:11:39.427049 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-09 05:11:39.427061 | orchestrator | Monday 09 February 2026 05:11:31 +0000 (0:00:02.124) 0:00:26.750 ******* 2026-02-09 05:11:39.427072 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:11:39.427084 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:11:39.427094 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:11:39.427106 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:11:39.427117 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:11:39.427128 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:11:39.427140 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:11:39.427150 | orchestrator | 2026-02-09 05:11:39.427162 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-09 05:11:39.427174 | orchestrator | Monday 09 February 2026 05:11:33 +0000 (0:00:01.935) 0:00:28.686 ******* 2026-02-09 05:11:39.427185 | orchestrator | ok: [testbed-manager] 2026-02-09 05:11:39.427198 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:11:39.427210 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:11:39.427221 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:11:39.427232 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:11:39.427243 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:11:39.427256 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:11:39.427266 | orchestrator | 2026-02-09 05:11:39.427276 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-09 05:11:39.427286 | orchestrator | Monday 09 February 2026 05:11:36 +0000 (0:00:02.849) 0:00:31.535 ******* 2026-02-09 05:11:39.427297 | orchestrator | ok: [testbed-manager] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:39.427309 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:39.427329 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:39.427344 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:39.427355 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:39.427375 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.308984 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:42.309152 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:11:42.309170 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309215 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309247 | orchestrator | 
ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309271 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309286 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309325 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309338 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309358 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309370 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309381 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309392 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309404 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309444 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:11:42.309457 | 
orchestrator |
2026-02-09 05:11:42.309470 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-09 05:11:42.309483 | orchestrator | Monday 09 February 2026 05:11:41 +0000 (0:00:04.745) 0:00:36.281 *******
2026-02-09 05:11:42.309496 | orchestrator | [WARNING]: Skipped
2026-02-09 05:11:42.309518 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-09 05:12:02.396996 | orchestrator | to this access issue:
2026-02-09 05:12:02.397111 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-09 05:12:02.397123 | orchestrator | directory
2026-02-09 05:12:02.397130 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-09 05:12:02.397137 | orchestrator |
2026-02-09 05:12:02.397143 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-09 05:12:02.397159 | orchestrator | Monday 09 February 2026 05:11:43 +0000 (0:00:02.323) 0:00:38.605 *******
2026-02-09 05:12:02.397178 | orchestrator | [WARNING]: Skipped
2026-02-09 05:12:02.397184 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-09 05:12:02.397190 | orchestrator | to this access issue:
2026-02-09 05:12:02.397218 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-09 05:12:02.397229 | orchestrator | directory
2026-02-09 05:12:02.397237 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-09 05:12:02.397247 | orchestrator |
2026-02-09 05:12:02.397257 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-09 05:12:02.397266 | orchestrator | Monday 09 February 2026 05:11:45 +0000 (0:00:01.935) 0:00:40.540 *******
2026-02-09 05:12:02.397275 | orchestrator | [WARNING]: Skipped
2026-02-09 05:12:02.397285 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-09 05:12:02.397295 | orchestrator | to this access issue:
2026-02-09 05:12:02.397304 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-09 05:12:02.397314 | orchestrator | directory
2026-02-09 05:12:02.397324 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-09 05:12:02.397334 | orchestrator |
2026-02-09 05:12:02.397343 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-09 05:12:02.397353 | orchestrator | Monday 09 February 2026 05:11:47 +0000 (0:00:01.862) 0:00:42.403 *******
2026-02-09 05:12:02.397363 | orchestrator | [WARNING]: Skipped
2026-02-09 05:12:02.397372 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-09 05:12:02.397378 | orchestrator | to this access issue:
2026-02-09 05:12:02.397384 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-09 05:12:02.397431 | orchestrator | directory
2026-02-09 05:12:02.397452 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-09 05:12:02.397464 | orchestrator |
2026-02-09 05:12:02.397474 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-09 05:12:02.397484 | orchestrator | Monday 09 February 2026 05:11:49 +0000 (0:00:01.898) 0:00:44.302 *******
2026-02-09 05:12:02.397490 | orchestrator | ok: [testbed-manager]
2026-02-09 05:12:02.397496 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:12:02.397503 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:12:02.397509 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:12:02.397515 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:12:02.397522 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:12:02.397529 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:12:02.397536 | orchestrator |
2026-02-09 05:12:02.397547 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-09 05:12:02.397558 | orchestrator | Monday 09 February 2026 05:11:53 +0000 (0:00:03.823) 0:00:48.126 *******
2026-02-09 05:12:02.397569 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-09 05:12:02.397578 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-09 05:12:02.397618 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-09 05:12:02.397626 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-09 05:12:02.397634 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-09 05:12:02.397645 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-09 05:12:02.397652 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-09 05:12:02.397660 | orchestrator |
2026-02-09 05:12:02.397667 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-09 05:12:02.397675 | orchestrator | Monday 09 February 2026 05:11:56 +0000 (0:00:03.325) 0:00:51.452 *******
2026-02-09 05:12:02.397683 | orchestrator | ok: [testbed-manager]
2026-02-09 05:12:02.397690 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:12:02.397697 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:12:02.397704 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:12:02.397720 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:12:02.397728 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:12:02.397735 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:12:02.397753 | orchestrator |
2026-02-09
05:12:02.397761 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-09 05:12:02.397769 | orchestrator | Monday 09 February 2026 05:11:59 +0000 (0:00:02.798) 0:00:54.251 ******* 2026-02-09 05:12:02.397778 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:02.397813 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:02.397824 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:02.397831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:02.397841 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:02.397850 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:02.397862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:02.397875 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:02.397889 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:10.876841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:10.876959 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:10.876969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:10.876975 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:10.876999 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:10.877029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:10.877036 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:10.877057 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:10.877064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:10.877070 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:10.877077 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:10.877083 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 05:12:10.877094 | orchestrator |
2026-02-09 05:12:10.877101 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-09 05:12:10.877108 | orchestrator | Monday 09 February 2026 05:12:02 +0000 (0:00:03.001) 0:00:57.252 *******
2026-02-09 05:12:10.877115 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-09 05:12:10.877125 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-09 05:12:10.877131 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-09 05:12:10.877137 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-09 05:12:10.877142 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-09 05:12:10.877148 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-09 05:12:10.877154 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-09 05:12:10.877160 | orchestrator |
2026-02-09 05:12:10.877166 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-09 05:12:10.877171 | orchestrator | Monday 09 February 2026 05:12:05 +0000 (0:00:02.929) 0:01:00.181 *******
2026-02-09 05:12:10.877177 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-09 05:12:10.877183 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-09 05:12:10.877189 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-09 05:12:10.877194 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-09 05:12:10.877200 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-09 05:12:10.877206 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-09 05:12:10.877211 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-09 05:12:10.877217 | orchestrator |
2026-02-09 05:12:10.877223 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-02-09 05:12:10.877228 | orchestrator | Monday 09 February 2026 05:12:08 +0000 (0:00:03.189) 0:01:03.371 *******
2026-02-09 05:12:10.877240 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-09 05:12:12.857798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:12.857946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:12.857985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:12.858009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:12.858072 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:12.858083 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:12.858097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-09 05:12:12.858135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:12.858149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:12.858171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:12.858191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:12.858205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:12.858215 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:12.858225 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:12.858248 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:15.622294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:15.622483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:15.622513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:15.622548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:15.622567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:12:15.622587 | orchestrator | 2026-02-09 05:12:15.622605 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-09 05:12:15.622618 | orchestrator | Monday 09 February 2026 05:12:12 +0000 (0:00:04.347) 0:01:07.718 ******* 2026-02-09 05:12:15.622630 | orchestrator | changed: [testbed-manager] => { 2026-02-09 05:12:15.622642 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:12:15.622652 | orchestrator | } 2026-02-09 05:12:15.622663 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:12:15.622674 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:12:15.622685 | orchestrator | } 2026-02-09 05:12:15.622695 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:12:15.622706 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:12:15.622722 | orchestrator | } 2026-02-09 05:12:15.622742 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 
05:12:15.622761 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:12:15.622778 | orchestrator | } 2026-02-09 05:12:15.622797 | orchestrator | changed: [testbed-node-3] => { 2026-02-09 05:12:15.622812 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:12:15.622823 | orchestrator | } 2026-02-09 05:12:15.622835 | orchestrator | changed: [testbed-node-4] => { 2026-02-09 05:12:15.622848 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:12:15.622860 | orchestrator | } 2026-02-09 05:12:15.622872 | orchestrator | changed: [testbed-node-5] => { 2026-02-09 05:12:15.622885 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:12:15.622897 | orchestrator | } 2026-02-09 05:12:15.622910 | orchestrator | 2026-02-09 05:12:15.622923 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-09 05:12:15.622935 | orchestrator | Monday 09 February 2026 05:12:14 +0000 (0:00:02.091) 0:01:09.810 ******* 2026-02-09 05:12:15.622951 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:12:15.622996 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:15.623011 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:15.623025 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:12:15.623038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:12:15.623052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:15.623066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:15.623079 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:12:15.623092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:12:15.623105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:15.623125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:15.623146 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:12:56.819158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:12:56.819367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:56.819388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:56.819401 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:12:56.819418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:12:56.819429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:56.819461 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:56.819472 | orchestrator | skipping: [testbed-node-3] 2026-02-09 
05:12:56.819481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:12:56.819512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:56.819522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:56.819531 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:12:56.819540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-09 05:12:56.819555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:56.819565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:12:56.819574 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:12:56.819592 | orchestrator | 2026-02-09 05:12:56.819602 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 05:12:56.819613 | orchestrator | Monday 09 February 2026 05:12:18 +0000 (0:00:03.070) 0:01:12.880 ******* 2026-02-09 05:12:56.819621 | orchestrator | 2026-02-09 05:12:56.819630 | orchestrator | TASK [common : 
Flush handlers] ************************************************* 2026-02-09 05:12:56.819639 | orchestrator | Monday 09 February 2026 05:12:18 +0000 (0:00:00.474) 0:01:13.355 ******* 2026-02-09 05:12:56.819648 | orchestrator | 2026-02-09 05:12:56.819658 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 05:12:56.819669 | orchestrator | Monday 09 February 2026 05:12:18 +0000 (0:00:00.467) 0:01:13.822 ******* 2026-02-09 05:12:56.819679 | orchestrator | 2026-02-09 05:12:56.819689 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 05:12:56.819699 | orchestrator | Monday 09 February 2026 05:12:19 +0000 (0:00:00.433) 0:01:14.256 ******* 2026-02-09 05:12:56.819709 | orchestrator | 2026-02-09 05:12:56.819720 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 05:12:56.819730 | orchestrator | Monday 09 February 2026 05:12:19 +0000 (0:00:00.448) 0:01:14.704 ******* 2026-02-09 05:12:56.819740 | orchestrator | 2026-02-09 05:12:56.819793 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 05:12:56.819807 | orchestrator | Monday 09 February 2026 05:12:20 +0000 (0:00:00.691) 0:01:15.395 ******* 2026-02-09 05:12:56.819817 | orchestrator | 2026-02-09 05:12:56.819827 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-09 05:12:56.819837 | orchestrator | Monday 09 February 2026 05:12:20 +0000 (0:00:00.418) 0:01:15.813 ******* 2026-02-09 05:12:56.819848 | orchestrator | 2026-02-09 05:12:56.819858 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-09 05:12:56.819868 | orchestrator | Monday 09 February 2026 05:12:21 +0000 (0:00:00.813) 0:01:16.627 ******* 2026-02-09 05:12:56.819877 | orchestrator | changed: [testbed-manager] 2026-02-09 05:12:56.819887 | 
orchestrator | changed: [testbed-node-3] 2026-02-09 05:12:56.819898 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:12:56.819908 | orchestrator | changed: [testbed-node-5] 2026-02-09 05:12:56.819918 | orchestrator | changed: [testbed-node-4] 2026-02-09 05:12:56.819928 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:12:56.819945 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:13:48.527247 | orchestrator | 2026-02-09 05:13:48.527380 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-09 05:13:48.527391 | orchestrator | Monday 09 February 2026 05:12:56 +0000 (0:00:35.045) 0:01:51.673 ******* 2026-02-09 05:13:48.527397 | orchestrator | changed: [testbed-node-3] 2026-02-09 05:13:48.527405 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:13:48.527412 | orchestrator | changed: [testbed-node-5] 2026-02-09 05:13:48.527418 | orchestrator | changed: [testbed-manager] 2026-02-09 05:13:48.527424 | orchestrator | changed: [testbed-node-4] 2026-02-09 05:13:48.527430 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:13:48.527436 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:13:48.527443 | orchestrator | 2026-02-09 05:13:48.527449 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-09 05:13:48.527455 | orchestrator | Monday 09 February 2026 05:13:32 +0000 (0:00:36.055) 0:02:27.729 ******* 2026-02-09 05:13:48.527461 | orchestrator | ok: [testbed-manager] 2026-02-09 05:13:48.527468 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:13:48.527474 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:13:48.527480 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:13:48.527486 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:13:48.527492 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:13:48.527498 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:13:48.527504 | orchestrator | 2026-02-09 05:13:48.527510 | orchestrator | 
RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-09 05:13:48.527517 | orchestrator | Monday 09 February 2026 05:13:35 +0000 (0:00:02.988) 0:02:30.717 ******* 2026-02-09 05:13:48.527545 | orchestrator | changed: [testbed-manager] 2026-02-09 05:13:48.527551 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:13:48.527557 | orchestrator | changed: [testbed-node-3] 2026-02-09 05:13:48.527563 | orchestrator | changed: [testbed-node-4] 2026-02-09 05:13:48.527570 | orchestrator | changed: [testbed-node-5] 2026-02-09 05:13:48.527575 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:13:48.527581 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:13:48.527587 | orchestrator | 2026-02-09 05:13:48.527593 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 05:13:48.527601 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 05:13:48.527609 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 05:13:48.527629 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 05:13:48.527637 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 05:13:48.527643 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 05:13:48.527649 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 05:13:48.527655 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 05:13:48.527661 | orchestrator | 2026-02-09 05:13:48.527667 | orchestrator | 2026-02-09 05:13:48.527673 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-09 05:13:48.527679 | orchestrator | Monday 09 February 2026 05:13:47 +0000 (0:00:12.103) 0:02:42.821 ******* 2026-02-09 05:13:48.527685 | orchestrator | =============================================================================== 2026-02-09 05:13:48.527691 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.06s 2026-02-09 05:13:48.527697 | orchestrator | common : Restart fluentd container ------------------------------------- 35.05s 2026-02-09 05:13:48.527701 | orchestrator | common : Restart cron container ---------------------------------------- 12.10s 2026-02-09 05:13:48.527705 | orchestrator | common : Copying over config.json files for services -------------------- 4.75s 2026-02-09 05:13:48.527709 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.41s 2026-02-09 05:13:48.527712 | orchestrator | service-check-containers : common | Check containers -------------------- 4.35s 2026-02-09 05:13:48.527716 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.82s 2026-02-09 05:13:48.527721 | orchestrator | common : Flush handlers ------------------------------------------------- 3.75s 2026-02-09 05:13:48.527728 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.38s 2026-02-09 05:13:48.527733 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.33s 2026-02-09 05:13:48.527739 | orchestrator | common : include_tasks -------------------------------------------------- 3.20s 2026-02-09 05:13:48.527745 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.19s 2026-02-09 05:13:48.527751 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.17s 2026-02-09 05:13:48.527757 | orchestrator | service-check-containers : 
Include tasks -------------------------------- 3.07s 2026-02-09 05:13:48.527763 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.00s 2026-02-09 05:13:48.527770 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.99s 2026-02-09 05:13:48.527776 | orchestrator | common : include_tasks -------------------------------------------------- 2.97s 2026-02-09 05:13:48.527805 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.95s 2026-02-09 05:13:48.527813 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.93s 2026-02-09 05:13:48.527820 | orchestrator | common : Copying over kolla.target -------------------------------------- 2.85s 2026-02-09 05:13:48.845390 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-09 05:13:51.038888 | orchestrator | 2026-02-09 05:13:51 | INFO  | Task bf08012c-b266-4dcc-8072-1c4ad0afc210 (loadbalancer) was prepared for execution. 2026-02-09 05:13:51.038987 | orchestrator | 2026-02-09 05:13:51 | INFO  | It takes a moment until task bf08012c-b266-4dcc-8072-1c4ad0afc210 (loadbalancer) has been started and output is visible here. 
2026-02-09 05:14:25.851571 | orchestrator | 2026-02-09 05:14:25.851672 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 05:14:25.851685 | orchestrator | 2026-02-09 05:14:25.851692 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 05:14:25.851699 | orchestrator | Monday 09 February 2026 05:13:57 +0000 (0:00:01.980) 0:00:01.980 ******* 2026-02-09 05:14:25.851706 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:14:25.851714 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:14:25.851720 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:14:25.851726 | orchestrator | 2026-02-09 05:14:25.851733 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 05:14:25.851739 | orchestrator | Monday 09 February 2026 05:13:59 +0000 (0:00:01.789) 0:00:03.770 ******* 2026-02-09 05:14:25.851746 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-09 05:14:25.851753 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-09 05:14:25.851759 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-09 05:14:25.851765 | orchestrator | 2026-02-09 05:14:25.851771 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-09 05:14:25.851778 | orchestrator | 2026-02-09 05:14:25.851784 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-09 05:14:25.851791 | orchestrator | Monday 09 February 2026 05:14:02 +0000 (0:00:02.691) 0:00:06.461 ******* 2026-02-09 05:14:25.851797 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:14:25.851804 | orchestrator | 2026-02-09 05:14:25.851824 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-02-09 05:14:25.851831 | orchestrator | Monday 09 February 2026 05:14:04 +0000 (0:00:01.993) 0:00:08.454 ******* 2026-02-09 05:14:25.851845 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:14:25.851851 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:14:25.851858 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:14:25.851864 | orchestrator | 2026-02-09 05:14:25.851870 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-02-09 05:14:25.851876 | orchestrator | Monday 09 February 2026 05:14:06 +0000 (0:00:01.952) 0:00:10.406 ******* 2026-02-09 05:14:25.851882 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:14:25.851889 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:14:25.851895 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:14:25.851901 | orchestrator | 2026-02-09 05:14:25.851907 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-09 05:14:25.851914 | orchestrator | Monday 09 February 2026 05:14:08 +0000 (0:00:02.035) 0:00:12.442 ******* 2026-02-09 05:14:25.851920 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:14:25.851926 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:14:25.851932 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:14:25.851938 | orchestrator | 2026-02-09 05:14:25.851945 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-09 05:14:25.851951 | orchestrator | Monday 09 February 2026 05:14:10 +0000 (0:00:01.675) 0:00:14.118 ******* 2026-02-09 05:14:25.851957 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:14:25.851980 | orchestrator | 2026-02-09 05:14:25.851986 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-09 05:14:25.851993 | orchestrator | Monday 09 February 2026 05:14:12 +0000 (0:00:01.993) 0:00:16.111 ******* 2026-02-09 
05:14:25.851999 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:14:25.852005 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:14:25.852011 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:14:25.852018 | orchestrator | 2026-02-09 05:14:25.852024 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-09 05:14:25.852030 | orchestrator | Monday 09 February 2026 05:14:13 +0000 (0:00:01.664) 0:00:17.776 ******* 2026-02-09 05:14:25.852037 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-09 05:14:25.852043 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-09 05:14:25.852049 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-09 05:14:25.852055 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-09 05:14:25.852061 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-09 05:14:25.852068 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-09 05:14:25.852074 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-09 05:14:25.852081 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-09 05:14:25.852087 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-09 05:14:25.852094 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-09 05:14:25.852100 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-09 05:14:25.852106 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
2026-02-09 05:14:25.852114 | orchestrator | 2026-02-09 05:14:25.852121 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-09 05:14:25.852128 | orchestrator | Monday 09 February 2026 05:14:16 +0000 (0:00:03.219) 0:00:20.995 ******* 2026-02-09 05:14:25.852136 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-09 05:14:25.852144 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-09 05:14:25.852151 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-09 05:14:25.852158 | orchestrator | 2026-02-09 05:14:25.852165 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-09 05:14:25.852185 | orchestrator | Monday 09 February 2026 05:14:18 +0000 (0:00:01.936) 0:00:22.932 ******* 2026-02-09 05:14:25.852192 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-09 05:14:25.852199 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-09 05:14:25.852206 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-09 05:14:25.852213 | orchestrator | 2026-02-09 05:14:25.852242 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-09 05:14:25.852249 | orchestrator | Monday 09 February 2026 05:14:21 +0000 (0:00:02.277) 0:00:25.209 ******* 2026-02-09 05:14:25.852257 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-09 05:14:25.852264 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:14:25.852272 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-09 05:14:25.852279 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:14:25.852286 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-09 05:14:25.852293 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:14:25.852299 | orchestrator | 2026-02-09 05:14:25.852305 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-02-09 05:14:25.852311 | orchestrator | Monday 09 February 2026 05:14:23 +0000 (0:00:01.902) 0:00:27.112 ******* 2026-02-09 05:14:25.852329 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-09 05:14:25.852341 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-09 05:14:25.852348 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-09 05:14:25.852355 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:14:25.852361 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:14:25.852372 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:14:36.916989 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:14:36.917183 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:14:36.917204 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:14:36.917250 | orchestrator | 2026-02-09 05:14:36.917264 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-09 05:14:36.917277 | orchestrator | Monday 09 February 2026 05:14:25 +0000 (0:00:02.735) 0:00:29.848 ******* 2026-02-09 05:14:36.917288 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:14:36.917301 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:14:36.917313 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:14:36.917324 | orchestrator | 2026-02-09 05:14:36.917335 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-09 05:14:36.917346 | orchestrator | Monday 09 February 2026 05:14:27 +0000 (0:00:02.037) 0:00:31.885 ******* 2026-02-09 05:14:36.917357 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-09 05:14:36.917370 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-09 05:14:36.917381 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-09 05:14:36.917391 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-09 05:14:36.917402 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-09 05:14:36.917413 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-09 05:14:36.917424 | orchestrator | 2026-02-09 05:14:36.917435 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-09 05:14:36.917446 | orchestrator | Monday 09 February 2026 05:14:30 +0000 (0:00:02.779) 0:00:34.664 ******* 2026-02-09 05:14:36.917457 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:14:36.917468 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:14:36.917478 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:14:36.917489 | orchestrator | 2026-02-09 05:14:36.917500 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-09 05:14:36.917511 | orchestrator | Monday 09 February 2026 05:14:32 +0000 (0:00:02.288) 0:00:36.953 ******* 2026-02-09 05:14:36.917522 | orchestrator | ok: 
[testbed-node-0] 2026-02-09 05:14:36.917533 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:14:36.917543 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:14:36.917554 | orchestrator | 2026-02-09 05:14:36.917565 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-09 05:14:36.917576 | orchestrator | Monday 09 February 2026 05:14:35 +0000 (0:00:02.255) 0:00:39.208 ******* 2026-02-09 05:14:36.917588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 05:14:36.917651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:14:36.917670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:14:36.917685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-09 05:14:36.917697 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:14:36.917709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 05:14:36.917721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:14:36.917733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:14:36.917754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-09 05:14:36.917765 | orchestrator | skipping: [testbed-node-1] 2026-02-09 
05:14:36.917790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 05:14:40.982780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:14:40.982886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:14:40.982903 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-09 05:14:40.982917 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:14:40.982930 | orchestrator | 2026-02-09 05:14:40.982942 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-09 05:14:40.982977 | orchestrator | Monday 09 February 2026 05:14:36 +0000 (0:00:01.702) 0:00:40.910 ******* 2026-02-09 05:14:40.982989 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-09 05:14:40.983001 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-09 05:14:40.983013 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-09 05:14:40.983042 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:14:40.983055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:14:40.983066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-09 05:14:40.983084 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:14:40.983114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:14:40.983126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-09 05:14:40.983150 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:14:54.705249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:14:54.705370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d', '__omit_place_holder__3cd7314166cbd3e01dc2ae081c474d631b36fb9d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-09 05:14:54.705388 | orchestrator | 2026-02-09 05:14:54.705427 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-09 05:14:54.705442 | orchestrator | Monday 09 February 2026 05:14:40 +0000 (0:00:04.068) 0:00:44.979 ******* 2026-02-09 05:14:54.705454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-09 05:14:54.705467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-09 05:14:54.705479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-09 05:14:54.705505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:14:54.705535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:14:54.705547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:14:54.705568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:14:54.705580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:14:54.705592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:14:54.705603 | orchestrator | 2026-02-09 05:14:54.705614 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-09 05:14:54.705625 | orchestrator | Monday 09 February 2026 05:14:45 +0000 (0:00:04.758) 0:00:49.738 ******* 2026-02-09 05:14:54.705636 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-09 05:14:54.705648 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-09 05:14:54.705659 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-09 
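The `healthcheck` mappings repeated throughout these service definitions (`interval`, `retries`, `start_period`, `test`, `timeout`) correspond to Docker container health checks. A hedged sketch of how such a mapping could be translated into `docker run` flags — the `healthcheck_flags` helper is hypothetical, and treating the numeric fields as seconds is an assumption based on typical kolla defaults:

```python
def healthcheck_flags(hc):
    """Translate a kolla-style healthcheck dict into Docker CLI flags."""
    test = hc["test"]
    # kolla uses ['CMD-SHELL', '<command>']; keep only the command part
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",      # seconds assumed
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

if __name__ == "__main__":
    hc = {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"],
        "timeout": "30",
    }
    print(healthcheck_flags(hc))
```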
05:14:54.705670 | orchestrator |
2026-02-09 05:14:54.705681 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-02-09 05:14:54.705692 | orchestrator | Monday 09 February 2026 05:14:48 +0000 (0:00:02.709) 0:00:52.448 *******
2026-02-09 05:14:54.705703 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-09 05:14:54.705718 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-09 05:14:54.705729 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-09 05:14:54.705740 | orchestrator |
2026-02-09 05:14:54.705751 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-02-09 05:14:54.705764 | orchestrator | Monday 09 February 2026 05:14:52 +0000 (0:00:04.359) 0:00:56.807 *******
2026-02-09 05:14:54.705778 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:14:54.705793 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:14:54.705813 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:15:15.626405 | orchestrator |
2026-02-09 05:15:15.626525 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-09 05:15:15.626544 | orchestrator | Monday 09 February 2026 05:14:54 +0000 (0:00:01.895) 0:00:58.703 *******
2026-02-09 05:15:15.626557 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-09 05:15:15.626596 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-09 05:15:15.626607 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-09 05:15:15.626618 | orchestrator |
2026-02-09 05:15:15.626630 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-09 05:15:15.626641 | orchestrator | Monday 09 February 2026 05:14:57 +0000 (0:00:03.101) 0:01:01.804 *******
2026-02-09 05:15:15.626651 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-09 05:15:15.626663 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-09 05:15:15.626674 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-09 05:15:15.626685 | orchestrator |
2026-02-09 05:15:15.626696 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-09 05:15:15.626707 | orchestrator | Monday 09 February 2026 05:15:00 +0000 (0:00:02.784) 0:01:04.589 *******
2026-02-09 05:15:15.626718 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 05:15:15.626728 | orchestrator |
2026-02-09 05:15:15.626739 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-09 05:15:15.626750 | orchestrator | Monday 09 February 2026 05:15:02 +0000 (0:00:01.960) 0:01:06.550 *******
2026-02-09 05:15:15.626761 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem)
2026-02-09 05:15:15.626772 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem)
2026-02-09 05:15:15.626783 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem)
2026-02-09 05:15:15.626794 | orchestrator |
2026-02-09 05:15:15.626805 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-02-09 05:15:15.626816 | orchestrator | Monday 09 February 2026 05:15:05 +0000 (0:00:02.673) 0:01:09.223 *******
2026-02-09 05:15:15.626827 |
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-09 05:15:15.626838 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-09 05:15:15.626848 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-09 05:15:15.626864 | orchestrator | 2026-02-09 05:15:15.626884 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-09 05:15:15.626903 | orchestrator | Monday 09 February 2026 05:15:07 +0000 (0:00:02.673) 0:01:11.896 ******* 2026-02-09 05:15:15.626922 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:15:15.626942 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:15:15.626963 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:15:15.626976 | orchestrator | 2026-02-09 05:15:15.626989 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-09 05:15:15.627003 | orchestrator | Monday 09 February 2026 05:15:09 +0000 (0:00:01.394) 0:01:13.291 ******* 2026-02-09 05:15:15.627016 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:15:15.627028 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:15:15.627040 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:15:15.627053 | orchestrator | 2026-02-09 05:15:15.627065 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-09 05:15:15.627077 | orchestrator | Monday 09 February 2026 05:15:11 +0000 (0:00:02.088) 0:01:15.380 ******* 2026-02-09 05:15:15.627094 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-09 05:15:15.627136 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-09 05:15:15.627201 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-09 05:15:15.627223 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:15:15.627235 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:15:15.627246 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:15:15.627259 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:15:15.627278 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:15:15.627304 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:15:19.500628 | orchestrator | 2026-02-09 05:15:19.500736 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-09 05:15:19.500754 | orchestrator | Monday 09 February 2026 05:15:15 +0000 (0:00:04.240) 0:01:19.621 ******* 2026-02-09 05:15:19.500770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 05:15:19.500791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:15:19.500811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:15:19.500831 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:15:19.500851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 05:15:19.500871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:15:19.500941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:15:19.500960 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:15:19.501004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 05:15:19.501025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:15:19.501044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:15:19.501062 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:15:19.501082 | orchestrator | 2026-02-09 05:15:19.501100 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-02-09 05:15:19.501121 | orchestrator | Monday 09 February 2026 05:15:17 +0000 (0:00:01.672) 0:01:21.293 ******* 2026-02-09 05:15:19.501142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 05:15:19.501206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:15:19.501229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:15:19.501243 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:15:19.501268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 05:15:31.246135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:15:31.246273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:15:31.246289 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:15:31.246302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 05:15:31.246336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:15:31.246347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:15:31.246357 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:15:31.246368 | orchestrator | 2026-02-09 05:15:31.246379 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-09 05:15:31.246390 | orchestrator | Monday 09 February 2026 05:15:19 +0000 (0:00:02.208) 0:01:23.502 ******* 2026-02-09 05:15:31.246413 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-09 05:15:31.246425 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-09 05:15:31.246434 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-09 05:15:31.246444 | orchestrator | 2026-02-09 05:15:31.246454 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-09 05:15:31.246464 | orchestrator | Monday 09 February 2026 05:15:21 +0000 (0:00:02.511) 0:01:26.013 ******* 2026-02-09 05:15:31.246473 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-09 05:15:31.246483 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-09 05:15:31.246493 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-09 05:15:31.246503 | orchestrator | 2026-02-09 05:15:31.246528 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-09 05:15:31.246539 | orchestrator | Monday 09 February 2026 05:15:24 +0000 (0:00:02.526) 0:01:28.540 ******* 2026-02-09 05:15:31.246548 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-09 05:15:31.246558 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-09 05:15:31.246568 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-09 05:15:31.246578 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:15:31.246591 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-09 05:15:31.246602 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-09 05:15:31.246613 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:15:31.246625 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-09 05:15:31.246637 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:15:31.246648 | orchestrator | 2026-02-09 05:15:31.246660 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-09 05:15:31.246678 | orchestrator | Monday 09 February 2026 05:15:27 +0000 (0:00:02.532) 0:01:31.072 ******* 2026-02-09 05:15:31.246690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-09 05:15:31.246703 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-09 05:15:31.246714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-09 05:15:31.246732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 
'timeout': '30'}}}) 2026-02-09 05:15:31.246749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:15:35.017747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:15:35.017912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:15:35.017928 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:15:35.017939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:15:35.017949 | orchestrator | 2026-02-09 05:15:35.017961 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-09 05:15:35.017972 | orchestrator | Monday 09 February 2026 05:15:31 +0000 (0:00:04.170) 0:01:35.243 ******* 2026-02-09 05:15:35.017983 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:15:35.017995 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:15:35.018005 | orchestrator | } 2026-02-09 05:15:35.018075 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:15:35.018087 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:15:35.018096 | orchestrator | } 2026-02-09 05:15:35.018106 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 05:15:35.018116 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:15:35.018125 | orchestrator | } 2026-02-09 
05:15:35.018135 | orchestrator | 2026-02-09 05:15:35.018145 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-09 05:15:35.018192 | orchestrator | Monday 09 February 2026 05:15:32 +0000 (0:00:01.445) 0:01:36.688 ******* 2026-02-09 05:15:35.018204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 05:15:35.018234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:15:35.018254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:15:35.018264 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:15:35.018275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 05:15:35.018307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:15:35.018317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:15:35.018327 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:15:35.018342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 05:15:35.018353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:15:35.018378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:15:40.512929 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:15:40.513055 | orchestrator | 2026-02-09 05:15:40.513070 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-09 05:15:40.513081 | orchestrator | Monday 09 February 2026 05:15:34 +0000 (0:00:02.323) 0:01:39.011 ******* 2026-02-09 05:15:40.513091 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:15:40.513100 | orchestrator | 2026-02-09 05:15:40.513109 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-09 05:15:40.513118 | orchestrator | Monday 09 February 2026 05:15:37 +0000 (0:00:02.020) 0:01:41.032 ******* 2026-02-09 05:15:40.513133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:15:40.513148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 05:15:40.513201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:40.513232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 05:15:40.513283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:15:40.513294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 05:15:40.513304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:40.513313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:15:40.513328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 05:15:40.513344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 05:15:40.513359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:42.298529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 05:15:42.298696 | orchestrator | 2026-02-09 05:15:42.298721 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-09 05:15:42.298741 | orchestrator | Monday 09 February 2026 05:15:41 +0000 (0:00:04.645) 0:01:45.678 ******* 2026-02-09 05:15:42.298762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:15:42.298784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 
'timeout': '30'}}})  2026-02-09 05:15:42.298828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:42.298880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 05:15:42.298898 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:15:42.298945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:15:42.298965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 05:15:42.298982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:42.298999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 05:15:42.299016 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:15:42.299054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:15:42.299074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-09 05:15:42.299101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:57.170629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-09 05:15:57.170740 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:15:57.170756 | orchestrator | 2026-02-09 05:15:57.170767 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-09 05:15:57.170779 | orchestrator | Monday 09 February 2026 05:15:43 +0000 (0:00:01.723) 0:01:47.401 ******* 2026-02-09 05:15:57.170790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}})  2026-02-09 05:15:57.170803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:15:57.170814 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:15:57.170824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:15:57.170857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:15:57.170868 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:15:57.170878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:15:57.170907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:15:57.170925 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:15:57.170941 | orchestrator | 2026-02-09 05:15:57.170959 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-09 05:15:57.170975 | orchestrator | Monday 09 February 2026 05:15:45 +0000 (0:00:02.221) 0:01:49.623 ******* 2026-02-09 05:15:57.170991 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:15:57.171009 | 
orchestrator | ok: [testbed-node-1] 2026-02-09 05:15:57.171025 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:15:57.171042 | orchestrator | 2026-02-09 05:15:57.171058 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-09 05:15:57.171074 | orchestrator | Monday 09 February 2026 05:15:47 +0000 (0:00:02.194) 0:01:51.817 ******* 2026-02-09 05:15:57.171091 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:15:57.171107 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:15:57.171123 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:15:57.171184 | orchestrator | 2026-02-09 05:15:57.171201 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-09 05:15:57.171216 | orchestrator | Monday 09 February 2026 05:15:50 +0000 (0:00:02.905) 0:01:54.723 ******* 2026-02-09 05:15:57.171232 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:15:57.171248 | orchestrator | 2026-02-09 05:15:57.171265 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-09 05:15:57.171282 | orchestrator | Monday 09 February 2026 05:15:52 +0000 (0:00:01.710) 0:01:56.434 ******* 2026-02-09 05:15:57.171326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:15:57.171343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:57.171368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:15:57.171387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:15:57.171399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 
05:15:57.171418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:58.829311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:58.829455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:15:58.829488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:15:58.829502 | orchestrator | 2026-02-09 05:15:58.829515 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-09 05:15:58.829528 | orchestrator | Monday 09 February 2026 05:15:57 +0000 (0:00:04.733) 0:02:01.168 ******* 2026-02-09 05:15:58.829543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:15:58.829557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:58.829589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:15:58.829610 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:15:58.829623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:15:58.829641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-09 05:15:58.829653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:15:58.829665 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:15:58.829677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:15:58.829697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2026-02-09 05:16:15.672152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:16:15.672263 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:15.672280 | orchestrator | 2026-02-09 05:16:15.672292 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-09 05:16:15.672304 | orchestrator | Monday 09 February 2026 05:15:58 +0000 (0:00:01.662) 0:02:02.830 ******* 2026-02-09 05:16:15.672315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:15.672344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:15.672356 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:15.672366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:15.672376 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:15.672386 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:15.672396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:15.672406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:15.672416 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:15.672426 | orchestrator | 2026-02-09 05:16:15.672436 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-09 05:16:15.672446 | orchestrator | Monday 09 February 2026 05:16:00 +0000 (0:00:02.014) 0:02:04.844 ******* 2026-02-09 05:16:15.672455 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:16:15.672466 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:16:15.672476 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:16:15.672485 | orchestrator | 2026-02-09 05:16:15.672495 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-09 05:16:15.672528 | orchestrator | Monday 09 February 2026 05:16:03 +0000 (0:00:02.325) 0:02:07.170 ******* 2026-02-09 05:16:15.672538 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:16:15.672548 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:16:15.672557 | orchestrator | ok: [testbed-node-2] 2026-02-09 
05:16:15.672567 | orchestrator | 2026-02-09 05:16:15.672577 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-09 05:16:15.672586 | orchestrator | Monday 09 February 2026 05:16:06 +0000 (0:00:02.907) 0:02:10.077 ******* 2026-02-09 05:16:15.672596 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:15.672606 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:15.672615 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:15.672625 | orchestrator | 2026-02-09 05:16:15.672634 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-09 05:16:15.672644 | orchestrator | Monday 09 February 2026 05:16:07 +0000 (0:00:01.372) 0:02:11.449 ******* 2026-02-09 05:16:15.672656 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:16:15.672667 | orchestrator | 2026-02-09 05:16:15.672678 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-09 05:16:15.672689 | orchestrator | Monday 09 February 2026 05:16:09 +0000 (0:00:01.703) 0:02:13.154 ******* 2026-02-09 05:16:15.672719 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}}}}) 2026-02-09 05:16:15.672738 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-09 05:16:15.672750 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-09 05:16:15.672761 | orchestrator | 2026-02-09 05:16:15.672773 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-09 05:16:15.672785 | orchestrator | Monday 09 
February 2026 05:16:12 +0000 (0:00:03.844) 0:02:16.998 ******* 2026-02-09 05:16:15.672804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-09 05:16:15.672816 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:15.672828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-09 05:16:15.672841 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:15.672861 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-09 05:16:28.060421 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:28.060526 | orchestrator | 2026-02-09 05:16:28.060537 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-09 05:16:28.060546 | orchestrator | Monday 09 February 2026 05:16:15 +0000 (0:00:02.670) 0:02:19.669 ******* 2026-02-09 05:16:28.060576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 05:16:28.060586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 
5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 05:16:28.060595 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:28.060621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 05:16:28.060627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 05:16:28.060633 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:28.060639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 05:16:28.060646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-09 05:16:28.060652 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:28.060658 | orchestrator | 2026-02-09 05:16:28.060665 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-09 05:16:28.060671 | orchestrator | Monday 09 February 2026 05:16:18 +0000 (0:00:02.939) 0:02:22.608 ******* 2026-02-09 05:16:28.060678 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:28.060684 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:28.060691 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:28.060717 | orchestrator | 2026-02-09 05:16:28.060725 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-09 05:16:28.060732 | orchestrator | Monday 09 February 2026 05:16:20 +0000 (0:00:01.518) 0:02:24.127 ******* 2026-02-09 05:16:28.060739 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:28.060746 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:28.060753 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:28.060760 | orchestrator | 2026-02-09 05:16:28.060767 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-09 05:16:28.060773 | orchestrator | Monday 09 February 2026 05:16:22 +0000 (0:00:02.409) 0:02:26.537 ******* 2026-02-09 05:16:28.060788 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:16:28.060795 | orchestrator | 2026-02-09 05:16:28.060802 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-09 05:16:28.060809 | orchestrator | Monday 09 February 2026 05:16:24 +0000 (0:00:01.759) 0:02:28.296 ******* 2026-02-09 05:16:28.060840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:16:28.060858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:16:28.060867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 05:16:28.060876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 05:16:28.060884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:16:28.060898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:16:30.095752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 05:16:30.095872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 05:16:30.095894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:16:30.095931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:16:30.095944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 05:16:30.096004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 05:16:30.096019 | orchestrator | 2026-02-09 05:16:30.096032 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-09 05:16:30.096044 | orchestrator | Monday 09 February 2026 05:16:29 +0000 
(0:00:04.872) 0:02:33.168 ******* 2026-02-09 05:16:30.096058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:16:30.096071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:16:30.096083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 05:16:30.096095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 05:16:30.096147 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:30.096179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:16:41.558867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:16:41.558947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 05:16:41.558954 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 05:16:41.558960 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:41.558967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:16:41.559001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:16:41.559016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-09 05:16:41.559021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-02-09 05:16:41.559026 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:41.559031 | orchestrator | 2026-02-09 05:16:41.559035 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-09 05:16:41.559041 | orchestrator | Monday 09 February 2026 05:16:31 +0000 (0:00:02.029) 0:02:35.198 ******* 2026-02-09 05:16:41.559046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:41.559052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:41.559057 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:41.559062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:41.559070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:41.559074 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:41.559079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-09 05:16:41.559083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:16:41.559087 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:41.559091 | orchestrator | 2026-02-09 05:16:41.559095 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-09 05:16:41.559099 | orchestrator | Monday 09 February 2026 05:16:33 +0000 (0:00:02.024) 0:02:37.223 ******* 2026-02-09 05:16:41.559129 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:16:41.559136 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:16:41.559140 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:16:41.559144 | orchestrator | 2026-02-09 05:16:41.559148 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-09 05:16:41.559155 | orchestrator | Monday 09 February 2026 05:16:35 +0000 (0:00:02.277) 0:02:39.501 ******* 2026-02-09 05:16:41.559159 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:16:41.559163 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:16:41.559167 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:16:41.559171 | orchestrator | 2026-02-09 05:16:41.559175 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-09 05:16:41.559179 | orchestrator | Monday 09 February 2026 05:16:38 +0000 (0:00:02.889) 0:02:42.390 ******* 2026-02-09 05:16:41.559183 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:41.559188 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:41.559192 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:41.559196 | orchestrator | 2026-02-09 05:16:41.559200 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-02-09 05:16:41.559204 | orchestrator | Monday 09 February 2026 05:16:40 +0000 (0:00:01.639) 0:02:44.030 ******* 2026-02-09 05:16:41.559208 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:41.559212 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:41.559219 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:16:47.132416 | orchestrator | 2026-02-09 05:16:47.132532 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-09 05:16:47.132551 | orchestrator | Monday 09 February 2026 05:16:41 +0000 (0:00:01.531) 0:02:45.561 ******* 2026-02-09 05:16:47.132563 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:16:47.132574 | orchestrator | 2026-02-09 05:16:47.132585 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-09 05:16:47.132596 | orchestrator | Monday 09 February 2026 05:16:43 +0000 (0:00:01.915) 0:02:47.477 ******* 2026-02-09 05:16:47.132612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:16:47.132652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 05:16:47.132667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 05:16:47.132679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 05:16:47.132705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 05:16:47.132736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:16:47.132748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-09 05:16:47.132769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:16:47.132781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 05:16:47.132799 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 05:16:47.132819 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:16:48.992840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 05:16:48.992997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 05:16:48.993183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 05:16:48.993205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:16:48.993241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 05:16:48.993259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-09 05:16:48.993305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 05:16:48.993341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 05:16:48.993361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:16:48.993381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-09 05:16:48.993401 | orchestrator | 2026-02-09 05:16:48.993422 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-09 05:16:48.993443 | orchestrator | Monday 09 February 2026 05:16:48 +0000 (0:00:04.887) 0:02:52.365 ******* 2026-02-09 05:16:48.993473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:16:48.993519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 05:16:50.237422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237520 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:16:50.237536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:16:50.237556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 05:16:50.237560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-09 05:16:50.237584 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:16:50.237592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:17:05.716179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-09 05:17:05.716296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-09 05:17:05.716312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-09 05:17:05.716324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-09 05:17:05.716359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:17:05.716370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-09 05:17:05.716381 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:05.716393 | orchestrator | 2026-02-09 05:17:05.716404 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-09 05:17:05.716415 | orchestrator | Monday 09 February 2026 05:16:50 +0000 (0:00:01.875) 0:02:54.240 ******* 2026-02-09 05:17:05.716444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:05.716457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:05.716468 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:05.716478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:05.716488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:05.716498 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:05.716508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:05.716517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:05.716527 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:05.716537 | orchestrator | 2026-02-09 05:17:05.716547 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-09 05:17:05.716557 | orchestrator | Monday 09 February 2026 05:16:52 +0000 (0:00:02.112) 0:02:56.353 ******* 2026-02-09 05:17:05.716566 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:17:05.716576 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:17:05.716592 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:17:05.716609 | orchestrator | 2026-02-09 05:17:05.716624 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-09 05:17:05.716639 | orchestrator | Monday 09 February 2026 05:16:54 +0000 (0:00:02.236) 0:02:58.589 ******* 2026-02-09 05:17:05.716666 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:17:05.716683 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:17:05.716698 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:17:05.716713 | orchestrator | 2026-02-09 05:17:05.716728 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-09 05:17:05.716744 | orchestrator | Monday 09 February 2026 05:16:57 +0000 (0:00:02.870) 0:03:01.460 ******* 2026-02-09 05:17:05.716760 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:05.716775 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:05.716792 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:05.716809 | orchestrator | 2026-02-09 05:17:05.716825 | orchestrator | TASK 
[include_role : glance] *************************************************** 2026-02-09 05:17:05.716840 | orchestrator | Monday 09 February 2026 05:16:58 +0000 (0:00:01.305) 0:03:02.766 ******* 2026-02-09 05:17:05.716856 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:17:05.716871 | orchestrator | 2026-02-09 05:17:05.716887 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-09 05:17:05.716901 | orchestrator | Monday 09 February 2026 05:17:01 +0000 (0:00:02.280) 0:03:05.047 ******* 2026-02-09 05:17:05.716944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 05:17:06.860162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 05:17:06.860272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 05:17:06.860293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 05:17:06.860306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-09 
05:17:06.860316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 
05:17:10.439454 | orchestrator | 2026-02-09 05:17:10.439561 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-09 05:17:10.439578 | orchestrator | Monday 09 February 2026 05:17:06 +0000 (0:00:05.823) 0:03:10.870 ******* 2026-02-09 05:17:10.439616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 05:17:10.439636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 05:17:10.439672 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:10.439711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 05:17:10.439728 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 05:17:10.439748 | orchestrator | 
skipping: [testbed-node-1] 2026-02-09 05:17:10.439770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-09 05:17:29.228752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-09 05:17:29.228892 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:29.228915 | orchestrator | 2026-02-09 05:17:29.228929 | orchestrator | TASK [haproxy-config 
: Configuring firewall for glance] ************************ 2026-02-09 05:17:29.228968 | orchestrator | Monday 09 February 2026 05:17:11 +0000 (0:00:04.754) 0:03:15.624 ******* 2026-02-09 05:17:29.228990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 05:17:29.229032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 05:17:29.229053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 05:17:29.229105 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:29.229160 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 05:17:29.229182 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:29.229201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 05:17:29.229222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-09 05:17:29.229240 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:29.229259 | orchestrator | 2026-02-09 05:17:29.229278 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 
2026-02-09 05:17:29.229298 | orchestrator | Monday 09 February 2026 05:17:16 +0000 (0:00:04.712) 0:03:20.337 ******* 2026-02-09 05:17:29.229318 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:17:29.229339 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:17:29.229373 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:17:29.229387 | orchestrator | 2026-02-09 05:17:29.229400 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-09 05:17:29.229413 | orchestrator | Monday 09 February 2026 05:17:18 +0000 (0:00:02.279) 0:03:22.616 ******* 2026-02-09 05:17:29.229425 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:17:29.229438 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:17:29.229450 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:17:29.229463 | orchestrator | 2026-02-09 05:17:29.229475 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-09 05:17:29.229487 | orchestrator | Monday 09 February 2026 05:17:21 +0000 (0:00:02.892) 0:03:25.509 ******* 2026-02-09 05:17:29.229500 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:29.229513 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:29.229525 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:29.229537 | orchestrator | 2026-02-09 05:17:29.229552 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-09 05:17:29.229571 | orchestrator | Monday 09 February 2026 05:17:22 +0000 (0:00:01.373) 0:03:26.882 ******* 2026-02-09 05:17:29.229589 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:17:29.229608 | orchestrator | 2026-02-09 05:17:29.229627 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-09 05:17:29.229645 | orchestrator | Monday 09 February 2026 05:17:24 +0000 (0:00:01.699) 0:03:28.581 ******* 2026-02-09 
05:17:29.229666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:17:29.229691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:17:46.106386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:17:46.106528 | orchestrator | 2026-02-09 05:17:46.106547 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-09 05:17:46.106583 | orchestrator | Monday 09 February 2026 05:17:29 +0000 (0:00:04.648) 0:03:33.230 ******* 2026-02-09 05:17:46.106597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:17:46.106610 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:46.106623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:17:46.106634 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:46.106646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:17:46.106657 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:46.106668 | orchestrator | 2026-02-09 05:17:46.106680 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-09 05:17:46.106691 | orchestrator | Monday 09 February 2026 05:17:30 +0000 (0:00:01.758) 0:03:34.989 ******* 2026-02-09 05:17:46.106703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:46.106717 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:46.106730 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:46.106773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:46.106786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:46.106805 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:46.106816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:46.106827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:17:46.106838 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:46.106849 | orchestrator | 2026-02-09 05:17:46.106860 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-09 05:17:46.106872 | orchestrator | Monday 09 February 2026 05:17:32 +0000 (0:00:01.492) 0:03:36.481 ******* 2026-02-09 05:17:46.106885 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:17:46.106899 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:17:46.106911 | 
orchestrator | ok: [testbed-node-2] 2026-02-09 05:17:46.106924 | orchestrator | 2026-02-09 05:17:46.106937 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-09 05:17:46.106950 | orchestrator | Monday 09 February 2026 05:17:34 +0000 (0:00:02.358) 0:03:38.840 ******* 2026-02-09 05:17:46.106962 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:17:46.106976 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:17:46.106989 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:17:46.107001 | orchestrator | 2026-02-09 05:17:46.107013 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-09 05:17:46.107026 | orchestrator | Monday 09 February 2026 05:17:37 +0000 (0:00:03.014) 0:03:41.855 ******* 2026-02-09 05:17:46.107039 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:46.107053 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:46.107098 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:46.107111 | orchestrator | 2026-02-09 05:17:46.107123 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-09 05:17:46.107136 | orchestrator | Monday 09 February 2026 05:17:39 +0000 (0:00:01.355) 0:03:43.210 ******* 2026-02-09 05:17:46.107148 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:17:46.107161 | orchestrator | 2026-02-09 05:17:46.107173 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-09 05:17:46.107185 | orchestrator | Monday 09 February 2026 05:17:40 +0000 (0:00:01.775) 0:03:44.985 ******* 2026-02-09 05:17:46.107220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 
05:17:47.822779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 05:17:47.822922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-09 05:17:47.822982 | orchestrator | 2026-02-09 05:17:47.822997 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-09 05:17:47.823011 | orchestrator | Monday 09 February 2026 05:17:46 +0000 (0:00:05.126) 0:03:50.111 ******* 2026-02-09 05:17:47.823027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 05:17:47.823041 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:47.823121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 05:17:56.630569 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:56.630688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-09 05:17:56.630734 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:56.630748 | orchestrator | 2026-02-09 05:17:56.630761 | 
orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-09 05:17:56.630773 | orchestrator | Monday 09 February 2026 05:17:47 +0000 (0:00:01.715) 0:03:51.827 ******* 2026-02-09 05:17:56.630786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-09 05:17:56.630800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 05:17:56.630813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-09 05:17:56.630826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 05:17:56.630838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-09 05:17:56.630850 | orchestrator | skipping: [testbed-node-0] 2026-02-09 
05:17:56.630879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-09 05:17:56.630891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 05:17:56.630902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-09 05:17:56.630992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-09 05:17:56.631011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 05:17:56.631032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 
'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 05:17:56.631044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-09 05:17:56.631269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-09 05:17:56.631435 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:56.631486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-09 05:17:56.631501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-09 05:17:56.631513 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:56.631524 | orchestrator | 2026-02-09 05:17:56.631537 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-09 05:17:56.631550 | orchestrator | Monday 09 February 2026 05:17:49 +0000 (0:00:01.938) 0:03:53.766 ******* 2026-02-09 05:17:56.631561 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:17:56.631573 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:17:56.631583 | 
orchestrator | ok: [testbed-node-2] 2026-02-09 05:17:56.631595 | orchestrator | 2026-02-09 05:17:56.631606 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-09 05:17:56.631617 | orchestrator | Monday 09 February 2026 05:17:52 +0000 (0:00:02.344) 0:03:56.111 ******* 2026-02-09 05:17:56.631628 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:17:56.631658 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:17:56.631681 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:17:56.631692 | orchestrator | 2026-02-09 05:17:56.631703 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-09 05:17:56.631714 | orchestrator | Monday 09 February 2026 05:17:54 +0000 (0:00:02.860) 0:03:58.971 ******* 2026-02-09 05:17:56.631725 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:17:56.631735 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:17:56.631746 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:17:56.631757 | orchestrator | 2026-02-09 05:17:56.631768 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-09 05:17:56.631779 | orchestrator | Monday 09 February 2026 05:17:56 +0000 (0:00:01.457) 0:04:00.429 ******* 2026-02-09 05:17:56.631829 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:18:06.780155 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:18:06.780322 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:18:06.780339 | orchestrator | 2026-02-09 05:18:06.780353 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-09 05:18:06.780366 | orchestrator | Monday 09 February 2026 05:17:57 +0000 (0:00:01.349) 0:04:01.778 ******* 2026-02-09 05:18:06.780402 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:18:06.780462 | orchestrator | 2026-02-09 05:18:06.780476 | 
orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-09 05:18:06.780487 | orchestrator | Monday 09 February 2026 05:17:59 +0000 (0:00:02.113) 0:04:03.892 ******* 2026-02-09 05:18:06.780534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-09 05:18:06.780553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-09 05:18:06.780586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 05:18:06.780600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 05:18:06.780634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 05:18:06.780657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-09 05:18:06.780672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 05:18:06.780685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 05:18:06.780705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 05:18:06.780718 | orchestrator | 2026-02-09 05:18:06.780732 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-09 05:18:06.780746 | orchestrator | Monday 09 February 2026 05:18:04 +0000 (0:00:04.807) 0:04:08.699 ******* 2026-02-09 05:18:06.780769 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-09 05:18:08.632729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 05:18:08.632872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 05:18:08.632889 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:18:08.632929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-09 05:18:08.632945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 05:18:08.632957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 05:18:08.632995 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:18:08.633029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-09 05:18:08.633043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-09 05:18:08.633103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-09 05:18:08.633114 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:18:08.633125 | orchestrator | 2026-02-09 05:18:08.633138 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-09 05:18:08.633150 | orchestrator | Monday 09 February 2026 05:18:06 +0000 (0:00:02.078) 0:04:10.778 ******* 2026-02-09 05:18:08.633170 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-09 05:18:08.633186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-09 05:18:08.633199 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:18:08.633210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-09 05:18:08.633233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-09 05:18:08.633245 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:18:08.633259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-09 05:18:08.633273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  
2026-02-09 05:18:08.633287 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:18:08.633299 | orchestrator | 2026-02-09 05:18:08.633313 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-09 05:18:08.633335 | orchestrator | Monday 09 February 2026 05:18:08 +0000 (0:00:01.848) 0:04:12.626 ******* 2026-02-09 05:18:24.842277 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:18:24.842384 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:18:24.842397 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:18:24.842407 | orchestrator | 2026-02-09 05:18:24.842417 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-09 05:18:24.842427 | orchestrator | Monday 09 February 2026 05:18:10 +0000 (0:00:02.374) 0:04:15.000 ******* 2026-02-09 05:18:24.842436 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:18:24.842445 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:18:24.842454 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:18:24.842462 | orchestrator | 2026-02-09 05:18:24.842472 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-09 05:18:24.842481 | orchestrator | Monday 09 February 2026 05:18:14 +0000 (0:00:03.370) 0:04:18.371 ******* 2026-02-09 05:18:24.842490 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:18:24.842499 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:18:24.842508 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:18:24.842517 | orchestrator | 2026-02-09 05:18:24.842525 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-09 05:18:24.842534 | orchestrator | Monday 09 February 2026 05:18:15 +0000 (0:00:01.483) 0:04:19.854 ******* 2026-02-09 05:18:24.842543 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:18:24.842552 | orchestrator | 2026-02-09 
05:18:24.842562 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-09 05:18:24.842570 | orchestrator | Monday 09 February 2026 05:18:17 +0000 (0:00:01.846) 0:04:21.701 ******* 2026-02-09 05:18:24.842585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:18:24.842638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:18:24.842650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:18:24.842678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 
05:18:24.842688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:18:24.842702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:18:24.842719 | orchestrator | 2026-02-09 05:18:24.842728 | orchestrator | TASK [haproxy-config : Add configuration for magnum 
when using single external frontend] *** 2026-02-09 05:18:24.842738 | orchestrator | Monday 09 February 2026 05:18:23 +0000 (0:00:05.380) 0:04:27.082 ******* 2026-02-09 05:18:24.842747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:18:24.842763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})  2026-02-09 05:18:38.106882 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:18:38.106997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:18:38.107021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:18:38.107085 | 
orchestrator | skipping: [testbed-node-1] 2026-02-09 05:18:38.107115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:18:38.107128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:18:38.107140 | orchestrator | skipping: [testbed-node-2] 2026-02-09 
05:18:38.107151 | orchestrator |
2026-02-09 05:18:38.107163 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-02-09 05:18:38.107175 | orchestrator | Monday 09 February 2026 05:18:24 +0000 (0:00:01.761) 0:04:28.843 *******
2026-02-09 05:18:38.107204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:38.107220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:38.107233 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:18:38.107244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:38.107256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:38.107267 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:18:38.107278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:38.107298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:38.107309 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:18:38.107320 | orchestrator |
2026-02-09 05:18:38.107331 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-09 05:18:38.107342 | orchestrator | Monday 09 February 2026 05:18:26 +0000 (0:00:02.017) 0:04:30.861 *******
2026-02-09 05:18:38.107353 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:18:38.107364 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:18:38.107375 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:18:38.107386 | orchestrator |
2026-02-09 05:18:38.107397 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-09 05:18:38.107408 | orchestrator | Monday 09 February 2026 05:18:29 +0000 (0:00:02.334) 0:04:33.196 *******
2026-02-09 05:18:38.107420 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:18:38.107433 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:18:38.107445 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:18:38.107458 | orchestrator |
2026-02-09 05:18:38.107476 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-09 05:18:38.107489 | orchestrator | Monday 09 February 2026 05:18:32 +0000 (0:00:03.063) 0:04:36.259 *******
2026-02-09 05:18:38.107502 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 05:18:38.107514 | orchestrator |
2026-02-09 05:18:38.107527 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-09 05:18:38.107539 | orchestrator | Monday 09 February 2026 05:18:34 +0000 (0:00:02.053) 0:04:38.312 *******
2026-02-09 05:18:38.107554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image':
'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:18:38.107570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:18:38.107594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 05:18:39.836620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 05:18:39.836738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:18:39.836766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': 
{'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:18:39.836780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 05:18:39.836792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 05:18:39.836820 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:18:39.836851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:18:39.836867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 05:18:39.836878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 05:18:39.836889 | orchestrator | 2026-02-09 05:18:39.836900 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-09 05:18:39.836911 | orchestrator | Monday 09 February 2026 05:18:39 +0000 (0:00:04.887) 0:04:43.200 ******* 2026-02-09 05:18:39.836923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:18:39.836940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:18:43.306620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 05:18:43.306710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 05:18:43.306723 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:18:43.306747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:18:43.306756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:18:43.306766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 05:18:43.306804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-09 05:18:43.306813 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:18:43.306821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:18:43.306833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:18:43.306841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-09 05:18:43.306849 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-09 05:18:43.306862 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:18:43.306870 | orchestrator |
2026-02-09 05:18:43.306878 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-09 05:18:43.306886 | orchestrator | Monday 09 February 2026 05:18:40 +0000 (0:00:01.780) 0:04:44.980 *******
2026-02-09 05:18:43.306896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:43.306907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:43.306916 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:18:43.306924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:43.306936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:59.148969 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:18:59.149190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:59.149224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-09 05:18:59.149245 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:18:59.149263 | orchestrator |
2026-02-09 05:18:59.149284 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-09 05:18:59.149305 | orchestrator | Monday 09 February 2026 05:18:43 +0000 (0:00:02.323) 0:04:47.303 *******
2026-02-09 05:18:59.149325 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:18:59.149344 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:18:59.149362 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:18:59.149382 | orchestrator |
2026-02-09 05:18:59.149403 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-09 05:18:59.149422 | orchestrator | Monday 09 February 2026 05:18:45 +0000 (0:00:02.314) 0:04:49.618 *******
2026-02-09 05:18:59.149443 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:18:59.149458 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:18:59.149471 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:18:59.149483 | orchestrator |
2026-02-09 05:18:59.149516 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-09 05:18:59.149529 | orchestrator | Monday 09 February 2026 05:18:48 +0000 (0:00:03.016) 0:04:52.635 *******
2026-02-09 05:18:59.149542 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 05:18:59.149555 | orchestrator |
2026-02-09 05:18:59.149568 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-09 05:18:59.149580 | orchestrator | Monday 09 February 2026 05:18:51 +0000 (0:00:02.770) 0:04:55.405 *******
2026-02-09 05:18:59.149593 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:18:59.149605 | orchestrator |
2026-02-09 05:18:59.149618 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-09 05:18:59.149630 | orchestrator | Monday 09 February 2026 05:18:55 +0000 (0:00:04.170) 0:04:59.576 *******
2026-02-09 05:18:59.149650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000
rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:18:59.149717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 05:18:59.149733 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:18:59.149753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:18:59.149777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 05:18:59.149792 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:18:59.149816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 
05:19:02.879604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 05:19:02.879706 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:02.879722 | orchestrator | 2026-02-09 05:19:02.879733 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-09 05:19:02.879744 | orchestrator | Monday 09 February 2026 05:18:59 +0000 (0:00:03.571) 0:05:03.147 ******* 2026-02-09 05:19:02.879759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:19:02.879853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:19:02.879874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 05:19:02.879893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 05:19:02.879904 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:02.879914 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:02.879925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:19:02.879944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-09 05:19:19.747484 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:19.747637 | orchestrator | 2026-02-09 05:19:19.747660 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-09 05:19:19.747677 | orchestrator | Monday 09 February 2026 05:19:02 +0000 (0:00:03.735) 0:05:06.883 ******* 2026-02-09 05:19:19.747719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 05:19:19.747774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 05:19:19.747791 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:19.747807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 05:19:19.747822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 05:19:19.747836 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:19.747850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 05:19:19.747865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-09 05:19:19.747879 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:19.747892 | orchestrator | 2026-02-09 05:19:19.747906 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-09 05:19:19.747920 | orchestrator | Monday 09 February 2026 05:19:06 +0000 (0:00:04.067) 0:05:10.950 ******* 2026-02-09 05:19:19.747934 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:19:19.747972 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:19:19.747987 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:19:19.748001 | orchestrator | 2026-02-09 05:19:19.748048 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-09 05:19:19.748076 | orchestrator | Monday 09 February 2026 05:19:10 +0000 (0:00:03.128) 0:05:14.079 ******* 2026-02-09 05:19:19.748092 | orchestrator | skipping: 
[testbed-node-0] 2026-02-09 05:19:19.748107 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:19.748122 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:19.748136 | orchestrator | 2026-02-09 05:19:19.748152 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-09 05:19:19.748163 | orchestrator | Monday 09 February 2026 05:19:12 +0000 (0:00:02.914) 0:05:16.993 ******* 2026-02-09 05:19:19.748172 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:19.748182 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:19.748191 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:19.748201 | orchestrator | 2026-02-09 05:19:19.748220 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-09 05:19:19.748231 | orchestrator | Monday 09 February 2026 05:19:14 +0000 (0:00:01.432) 0:05:18.426 ******* 2026-02-09 05:19:19.748241 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:19:19.748251 | orchestrator | 2026-02-09 05:19:19.748261 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-09 05:19:19.748271 | orchestrator | Monday 09 February 2026 05:19:16 +0000 (0:00:02.291) 0:05:20.718 ******* 2026-02-09 05:19:19.748289 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-09 05:19:19.748307 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-09 05:19:19.748322 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-09 05:19:19.748337 | orchestrator | 2026-02-09 05:19:19.748351 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-09 05:19:19.748366 | orchestrator | Monday 09 
February 2026 05:19:19 +0000 (0:00:02.500) 0:05:23.219 ******* 2026-02-09 05:19:19.748392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-09 05:19:34.373641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-09 05:19:34.373751 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:34.373770 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:34.373783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-09 05:19:34.373795 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:34.373806 | orchestrator | 2026-02-09 05:19:34.373819 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-09 05:19:34.373831 | orchestrator | Monday 09 February 2026 05:19:20 +0000 (0:00:01.706) 0:05:24.926 ******* 2026-02-09 05:19:34.373844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-09 05:19:34.373857 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:34.373868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-09 05:19:34.373879 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:34.373890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-09 05:19:34.373902 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:34.373913 | orchestrator | 2026-02-09 05:19:34.373924 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-09 05:19:34.373935 | orchestrator | Monday 09 February 2026 05:19:22 +0000 (0:00:01.516) 0:05:26.442 ******* 2026-02-09 05:19:34.373966 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:34.373977 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:34.373988 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:34.373999 | orchestrator | 2026-02-09 05:19:34.374150 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-09 05:19:34.374170 | orchestrator | Monday 09 February 2026 05:19:23 +0000 (0:00:01.487) 0:05:27.930 ******* 2026-02-09 05:19:34.374188 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:34.374206 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:34.374222 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:34.374238 | orchestrator | 2026-02-09 05:19:34.374254 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-09 05:19:34.374271 | orchestrator | Monday 09 February 2026 05:19:26 +0000 (0:00:02.501) 0:05:30.431 ******* 2026-02-09 05:19:34.374289 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:34.374308 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:34.374327 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:34.374347 | orchestrator | 2026-02-09 05:19:34.374366 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-09 05:19:34.374387 | orchestrator | Monday 09 February 2026 05:19:27 +0000 (0:00:01.374) 0:05:31.806 ******* 2026-02-09 05:19:34.374401 
| orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:19:34.374412 | orchestrator | 2026-02-09 05:19:34.374422 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-09 05:19:34.374433 | orchestrator | Monday 09 February 2026 05:19:29 +0000 (0:00:02.047) 0:05:33.853 ******* 2026-02-09 05:19:34.374477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:19:34.374494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:34.374509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-09 05:19:34.374533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-09 05:19:34.374555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:34.556391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:34.556491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:34.556510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 05:19:34.556561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:34.556575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:34.556588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-09 05:19:34.556625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:34.556638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:34.556653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 05:19:34.556706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:34.556720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:19:34.556744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:34.672206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-09 05:19:34.672328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-09 05:19:34.672347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:34.672370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:34.672434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:19:34.672459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:34.672481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:34.672516 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 05:19:34.672538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-09 05:19:34.672566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': 
True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:34.672590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-09 05:19:35.951162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:35.951286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:35.951300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-09 05:19:35.951312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:35.951322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:35.951351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:35.951379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:35.951412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 05:19:35.951423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 05:19:35.951434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:35.951448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:35.951457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:35.951479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-09 05:19:37.108995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:37.109196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.109213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 05:19:37.109247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:37.109256 | orchestrator | 2026-02-09 05:19:37.109266 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-09 
05:19:37.109299 | orchestrator | Monday 09 February 2026 05:19:35 +0000 (0:00:06.103) 0:05:39.957 ******* 2026-02-09 05:19:37.109328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:19:37.109339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2026-02-09 05:19:37.109349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-09 05:19:37.109363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-09 05:19:37.109378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.109388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:37.109405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:37.179155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 05:19:37.179267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:19:37.179300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:37.179332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.179342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.179370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-09 05:19:37.179380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-09 05:19:37.179394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-09 05:19:37.179412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:37.179420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.179435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.247971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:19:37.248126 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:37.248155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 05:19:37.248186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.248195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:37.248224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:37.248234 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:37.248244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-09 05:19:37.248265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 05:19:37.248273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-09 05:19:37.248281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:37.248296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.485088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.485248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:37.485262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-09 05:19:37.485269 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:37.485275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:37.485281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.485302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-09 05:19:37.485309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:37.485326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 05:19:37.485332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:37.485337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:37.485343 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:37.485350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-09 05:19:37.485361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-09 05:19:53.909550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-09 05:19:53.909724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 
'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-09 05:19:53.909744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-09 05:19:53.909756 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:53.909769 | orchestrator | 2026-02-09 05:19:53.909779 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-09 05:19:53.909792 | orchestrator | Monday 09 February 2026 05:19:38 +0000 (0:00:02.534) 0:05:42.491 ******* 2026-02-09 05:19:53.909803 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:19:53.909817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:19:53.909828 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:19:53.909838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:19:53.909847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:19:53.909857 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:19:53.909866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:19:53.909926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:19:53.909937 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:19:53.909946 | orchestrator | 2026-02-09 05:19:53.909956 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 
2026-02-09 05:19:53.909966 | orchestrator | Monday 09 February 2026 05:19:41 +0000 (0:00:03.083) 0:05:45.575 ******* 2026-02-09 05:19:53.909974 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:19:53.909985 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:19:53.910082 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:19:53.910094 | orchestrator | 2026-02-09 05:19:53.910104 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-09 05:19:53.910115 | orchestrator | Monday 09 February 2026 05:19:43 +0000 (0:00:02.235) 0:05:47.810 ******* 2026-02-09 05:19:53.910124 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:19:53.910132 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:19:53.910141 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:19:53.910150 | orchestrator | 2026-02-09 05:19:53.910158 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-09 05:19:53.910166 | orchestrator | Monday 09 February 2026 05:19:46 +0000 (0:00:02.976) 0:05:50.787 ******* 2026-02-09 05:19:53.910175 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:19:53.910184 | orchestrator | 2026-02-09 05:19:53.910203 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-09 05:19:53.910211 | orchestrator | Monday 09 February 2026 05:19:49 +0000 (0:00:02.370) 0:05:53.157 ******* 2026-02-09 05:19:53.910222 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-09 05:19:53.910232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-09 05:19:53.910264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-09 05:20:10.393513 | orchestrator | 2026-02-09 05:20:10.393647 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-09 05:20:10.393677 | orchestrator | Monday 09 February 2026 05:19:53 +0000 (0:00:04.752) 0:05:57.910 ******* 2026-02-09 05:20:10.393724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-09 05:20:10.393753 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:20:10.393777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-09 05:20:10.393797 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:20:10.393816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-09 05:20:10.393869 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:20:10.393891 | orchestrator | 2026-02-09 05:20:10.393911 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-09 05:20:10.393930 | orchestrator | Monday 09 February 2026 05:19:55 +0000 (0:00:01.609) 0:05:59.519 ******* 2026-02-09 05:20:10.393953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:20:10.394118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:20:10.394154 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:20:10.394177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:20:10.394200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:20:10.394221 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:20:10.394254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:20:10.394277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:20:10.394293 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:20:10.394305 | orchestrator | 2026-02-09 05:20:10.394319 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-09 05:20:10.394332 | orchestrator | Monday 09 February 2026 05:19:57 +0000 (0:00:01.875) 0:06:01.395 ******* 2026-02-09 05:20:10.394345 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:20:10.394359 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:20:10.394372 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:20:10.394386 | orchestrator | 2026-02-09 05:20:10.394399 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-09 05:20:10.394409 | orchestrator | Monday 09 February 2026 05:19:59 +0000 (0:00:02.243) 0:06:03.638 ******* 2026-02-09 05:20:10.394420 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:20:10.394431 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:20:10.394442 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:20:10.394452 | orchestrator | 2026-02-09 05:20:10.394463 | orchestrator | TASK 
[include_role : nova] ***************************************************** 2026-02-09 05:20:10.394487 | orchestrator | Monday 09 February 2026 05:20:02 +0000 (0:00:02.963) 0:06:06.602 ******* 2026-02-09 05:20:10.394498 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:20:10.394509 | orchestrator | 2026-02-09 05:20:10.394520 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-09 05:20:10.394531 | orchestrator | Monday 09 February 2026 05:20:04 +0000 (0:00:02.407) 0:06:09.010 ******* 2026-02-09 05:20:10.394545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:20:10.394571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:20:11.609468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 
05:20:11.609578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:20:11.609618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:20:11.609632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:20:11.609663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:20:11.609682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:20:11.609695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:20:11.609714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:20:11.609726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:20:11.609738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:20:11.609749 | orchestrator | 2026-02-09 05:20:11.609762 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-09 05:20:11.609783 | orchestrator | Monday 09 February 2026 05:20:11 +0000 (0:00:06.600) 0:06:15.611 ******* 2026-02-09 05:20:12.368601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:20:12.368687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:20:12.368695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:20:12.368701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:20:12.368706 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:20:12.368721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:20:12.368729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:20:12.368737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:20:12.368742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:20:12.368746 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:20:12.368750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:20:12.368763 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:20:34.463783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-09 05:20:34.463901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-09 05:20:34.463919 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:20:34.463938 | orchestrator | 2026-02-09 05:20:34.463959 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-09 05:20:34.464058 | orchestrator | Monday 09 February 2026 05:20:13 +0000 (0:00:01.989) 0:06:17.600 ******* 2026-02-09 05:20:34.464082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-09 05:20:34.464155 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:20:34.464165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464237 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:20:34.464264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:20:34.464337 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:20:34.464350 | orchestrator | 2026-02-09 05:20:34.464369 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-09 05:20:34.464388 | orchestrator | Monday 09 February 2026 05:20:16 +0000 (0:00:03.011) 0:06:20.611 ******* 2026-02-09 05:20:34.464407 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:20:34.464426 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:20:34.464444 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:20:34.464463 | orchestrator | 2026-02-09 05:20:34.464481 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-09 05:20:34.464500 | orchestrator | Monday 09 February 2026 05:20:18 +0000 (0:00:02.354) 0:06:22.966 ******* 2026-02-09 05:20:34.464514 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:20:34.464526 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:20:34.464538 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:20:34.464551 | orchestrator | 2026-02-09 05:20:34.464563 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-09 05:20:34.464576 | orchestrator | Monday 09 February 2026 05:20:21 +0000 (0:00:03.040) 0:06:26.006 ******* 2026-02-09 05:20:34.464589 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:20:34.464602 | orchestrator | 2026-02-09 
05:20:34.464614 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-09 05:20:34.464627 | orchestrator | Monday 09 February 2026 05:20:24 +0000 (0:00:02.721) 0:06:28.728 ******* 2026-02-09 05:20:34.464640 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-09 05:20:34.464654 | orchestrator | 2026-02-09 05:20:34.464665 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-09 05:20:34.464675 | orchestrator | Monday 09 February 2026 05:20:26 +0000 (0:00:01.690) 0:06:30.419 ******* 2026-02-09 05:20:34.464688 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-09 05:20:34.464702 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-09 05:20:34.464723 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-09 05:20:34.464738 | orchestrator | 2026-02-09 05:20:34.464758 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-09 05:20:34.464776 | orchestrator | Monday 09 February 2026 05:20:32 +0000 (0:00:05.637) 0:06:36.057 ******* 2026-02-09 05:20:34.464804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 05:20:34.464837 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:20:57.565429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 05:20:57.565551 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:20:57.565570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 05:20:57.565583 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:20:57.565595 | orchestrator | 2026-02-09 05:20:57.565607 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-09 05:20:57.565619 | orchestrator | Monday 09 February 2026 05:20:34 +0000 (0:00:02.409) 0:06:38.466 ******* 2026-02-09 05:20:57.565638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 05:20:57.565686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 05:20:57.565706 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:20:57.565718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 05:20:57.565729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 05:20:57.565766 | 
orchestrator | skipping: [testbed-node-1] 2026-02-09 05:20:57.565777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 05:20:57.565788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-09 05:20:57.565799 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:20:57.565810 | orchestrator | 2026-02-09 05:20:57.565821 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-09 05:20:57.565831 | orchestrator | Monday 09 February 2026 05:20:36 +0000 (0:00:02.531) 0:06:40.998 ******* 2026-02-09 05:20:57.565842 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:20:57.565853 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:20:57.565863 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:20:57.565874 | orchestrator | 2026-02-09 05:20:57.565885 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-09 05:20:57.565895 | orchestrator | Monday 09 February 2026 05:20:40 +0000 (0:00:03.822) 0:06:44.821 ******* 2026-02-09 05:20:57.565906 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:20:57.565916 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:20:57.565929 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:20:57.565942 | orchestrator | 2026-02-09 05:20:57.565954 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-09 05:20:57.565993 | orchestrator | Monday 09 February 2026 05:20:44 +0000 (0:00:04.061) 0:06:48.882 ******* 2026-02-09 05:20:57.566010 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-09 05:20:57.566082 | orchestrator | 2026-02-09 05:20:57.566111 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-09 05:20:57.566126 | orchestrator | Monday 09 February 2026 05:20:46 +0000 (0:00:01.660) 0:06:50.543 ******* 2026-02-09 05:20:57.566160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 05:20:57.566175 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:20:57.566188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 05:20:57.566202 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:20:57.566213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 05:20:57.566233 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:20:57.566244 | orchestrator | 2026-02-09 05:20:57.566255 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-09 05:20:57.566267 | orchestrator | Monday 09 February 2026 05:20:48 +0000 (0:00:02.427) 0:06:52.971 ******* 2026-02-09 05:20:57.566278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 05:20:57.566289 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:20:57.566300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 05:20:57.566311 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:20:57.566322 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-09 05:20:57.566333 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:20:57.566344 | orchestrator | 2026-02-09 05:20:57.566355 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-09 05:20:57.566366 | orchestrator | Monday 09 February 2026 05:20:51 +0000 (0:00:02.634) 0:06:55.605 ******* 2026-02-09 05:20:57.566377 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:20:57.566387 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:20:57.566398 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:20:57.566409 | orchestrator | 2026-02-09 05:20:57.566425 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-09 05:20:57.566436 | orchestrator | Monday 09 February 2026 05:20:53 +0000 (0:00:02.356) 0:06:57.962 ******* 2026-02-09 05:20:57.566447 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:20:57.566458 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:20:57.566469 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:20:57.566479 | orchestrator | 2026-02-09 05:20:57.566490 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-09 05:20:57.566501 | orchestrator | Monday 09 February 2026 05:20:57 +0000 (0:00:03.605) 0:07:01.568 ******* 2026-02-09 05:21:25.790549 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:21:25.790669 | orchestrator | ok: [testbed-node-1] 2026-02-09 
05:21:25.790701 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:21:25.790723 | orchestrator | 2026-02-09 05:21:25.790743 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-09 05:21:25.790762 | orchestrator | Monday 09 February 2026 05:21:01 +0000 (0:00:04.061) 0:07:05.629 ******* 2026-02-09 05:21:25.790779 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-09 05:21:25.790833 | orchestrator | 2026-02-09 05:21:25.790854 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-09 05:21:25.790874 | orchestrator | Monday 09 February 2026 05:21:03 +0000 (0:00:02.380) 0:07:08.010 ******* 2026-02-09 05:21:25.790898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 05:21:25.790920 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:21:25.790938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': 
['timeout tunnel 10m']}}}})  2026-02-09 05:21:25.790949 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:21:25.790991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 05:21:25.791003 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:21:25.791014 | orchestrator | 2026-02-09 05:21:25.791026 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-09 05:21:25.791038 | orchestrator | Monday 09 February 2026 05:21:06 +0000 (0:00:02.448) 0:07:10.458 ******* 2026-02-09 05:21:25.791049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 05:21:25.791060 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:21:25.791076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 05:21:25.791088 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:21:25.791135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-09 05:21:25.791157 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:21:25.791168 | orchestrator | 2026-02-09 05:21:25.791179 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-09 05:21:25.791190 | orchestrator | Monday 09 February 2026 05:21:08 +0000 (0:00:02.439) 0:07:12.898 ******* 2026-02-09 05:21:25.791201 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:21:25.791212 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:21:25.791223 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:21:25.791233 | orchestrator | 2026-02-09 05:21:25.791244 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-09 05:21:25.791255 | orchestrator | Monday 09 February 2026 05:21:11 +0000 (0:00:02.568) 0:07:15.466 ******* 2026-02-09 05:21:25.791265 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:21:25.791276 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:21:25.791287 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:21:25.791297 | orchestrator 
| 2026-02-09 05:21:25.791308 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-09 05:21:25.791319 | orchestrator | Monday 09 February 2026 05:21:15 +0000 (0:00:03.576) 0:07:19.043 ******* 2026-02-09 05:21:25.791329 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:21:25.791340 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:21:25.791351 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:21:25.791361 | orchestrator | 2026-02-09 05:21:25.791372 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-09 05:21:25.791383 | orchestrator | Monday 09 February 2026 05:21:19 +0000 (0:00:04.352) 0:07:23.395 ******* 2026-02-09 05:21:25.791393 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:21:25.791404 | orchestrator | 2026-02-09 05:21:25.791415 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-09 05:21:25.791426 | orchestrator | Monday 09 February 2026 05:21:21 +0000 (0:00:02.551) 0:07:25.947 ******* 2026-02-09 05:21:25.791438 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 05:21:25.791452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 05:21:25.791465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 05:21:25.791497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 05:21:27.888109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:21:27.888245 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 05:21:27.888275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 05:21:27.888298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 05:21:27.888319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 05:21:27.888392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:21:27.888441 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-09 05:21:27.888463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 05:21:27.888482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 05:21:27.888501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 05:21:27.888535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:21:27.888558 | orchestrator | 2026-02-09 05:21:27.888579 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-09 05:21:27.888600 | orchestrator | Monday 09 February 2026 05:21:26 +0000 (0:00:05.004) 0:07:30.952 ******* 2026-02-09 05:21:27.888644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 05:21:29.012943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2026-02-09 05:21:29.013116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 05:21:29.013145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 05:21:29.013169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:21:29.013217 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:21:29.013261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 05:21:29.013285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 05:21:29.013328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 05:21:29.013350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 05:21:29.013363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:21:29.013383 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:21:29.013395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-09 05:21:29.013407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-09 05:21:29.013427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-09 05:21:46.878702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-09 05:21:46.878854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-09 05:21:46.878872 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:21:46.878887 | orchestrator | 2026-02-09 05:21:46.878900 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-09 05:21:46.878913 | orchestrator | Monday 09 February 2026 05:21:29 +0000 (0:00:02.067) 0:07:33.019 ******* 2026-02-09 05:21:46.878992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 05:21:46.879009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 05:21:46.879023 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:21:46.879035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 05:21:46.879046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 05:21:46.879057 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:21:46.879068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 05:21:46.879080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-09 05:21:46.879090 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:21:46.879101 | orchestrator | 2026-02-09 05:21:46.879113 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-09 05:21:46.879123 | orchestrator | Monday 09 February 2026 05:21:31 +0000 (0:00:02.361) 0:07:35.381 ******* 2026-02-09 05:21:46.879134 | orchestrator | ok: [testbed-node-0] 2026-02-09 
05:21:46.879149 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:21:46.879162 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:21:46.879181 | orchestrator | 2026-02-09 05:21:46.879194 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-09 05:21:46.879208 | orchestrator | Monday 09 February 2026 05:21:33 +0000 (0:00:02.209) 0:07:37.591 ******* 2026-02-09 05:21:46.879221 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:21:46.879234 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:21:46.879247 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:21:46.879259 | orchestrator | 2026-02-09 05:21:46.879272 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-09 05:21:46.879285 | orchestrator | Monday 09 February 2026 05:21:36 +0000 (0:00:03.139) 0:07:40.730 ******* 2026-02-09 05:21:46.879298 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:21:46.879312 | orchestrator | 2026-02-09 05:21:46.879324 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-09 05:21:46.879337 | orchestrator | Monday 09 February 2026 05:21:39 +0000 (0:00:02.598) 0:07:43.329 ******* 2026-02-09 05:21:46.879373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:21:46.879401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:21:46.879414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:21:46.879436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:21:46.879462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:21:51.035255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:21:51.035433 | orchestrator | 2026-02-09 05:21:51.035460 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-09 
05:21:51.035480 | orchestrator | Monday 09 February 2026 05:21:46 +0000 (0:00:07.545) 0:07:50.874 ******* 2026-02-09 05:21:51.035500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:21:51.035548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:21:51.035569 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:21:51.035617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:21:51.035670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:21:51.035689 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:21:51.035721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:21:51.035744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:21:51.035777 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:21:51.035803 | orchestrator | 2026-02-09 05:21:51.035827 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-09 05:21:51.035850 | orchestrator | Monday 09 February 2026 05:21:49 +0000 (0:00:02.305) 0:07:53.180 ******* 2026-02-09 05:21:51.035877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:21:51.035914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-09 05:22:00.050600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-09 05:22:00.050721 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:00.050743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:22:00.050758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-09 05:22:00.050771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-09 05:22:00.050783 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:00.050794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:22:00.050805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-09 05:22:00.050816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-09 05:22:00.050827 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:00.050838 | orchestrator | 2026-02-09 05:22:00.050868 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-09 05:22:00.050881 | orchestrator | Monday 09 February 2026 05:21:51 +0000 (0:00:01.860) 0:07:55.040 ******* 2026-02-09 05:22:00.050893 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:00.050904 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:00.050914 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:00.050925 | orchestrator | 2026-02-09 05:22:00.050936 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-09 05:22:00.051018 | orchestrator | Monday 09 February 2026 05:21:52 +0000 (0:00:01.494) 0:07:56.534 ******* 2026-02-09 05:22:00.051031 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:00.051042 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:00.051053 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:00.051064 | orchestrator | 2026-02-09 05:22:00.051074 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-09 05:22:00.051085 | orchestrator | Monday 09 February 2026 05:21:54 +0000 (0:00:02.351) 0:07:58.886 ******* 2026-02-09 05:22:00.051096 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:22:00.051107 | orchestrator | 2026-02-09 05:22:00.051118 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-09 05:22:00.051129 | orchestrator | Monday 09 February 2026 05:21:57 +0000 (0:00:02.636) 0:08:01.522 ******* 2026-02-09 05:22:00.051165 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-09 05:22:00.051182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 05:22:00.051194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:00.051207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:00.051226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 05:22:00.051249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-09 05:22:00.051261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 05:22:00.051282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic 
aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-09 05:22:02.114145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:02.114246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 05:22:02.114300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:02.114311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:02.114322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 05:22:02.114334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:02.114344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 05:22:02.114374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:22:02.114400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-09 05:22:02.114411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:02.114421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:02.114432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 05:22:02.114450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:22:04.311492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-09 05:22:04.311622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.311642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:22:04.311656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.311668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-09 05:22:04.311700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 05:22:04.311728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.311740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.311752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 05:22:04.311764 | orchestrator | 2026-02-09 05:22:04.311776 | orchestrator | TASK [haproxy-config : Add configuration for 
prometheus when using single external frontend] *** 2026-02-09 05:22:04.311789 | orchestrator | Monday 09 February 2026 05:22:03 +0000 (0:00:05.822) 0:08:07.345 ******* 2026-02-09 05:22:04.311801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-09 05:22:04.311814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 05:22:04.311833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.491699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.491813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 05:22:04.491831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:22:04.491846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-09 05:22:04.491859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.491912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.491926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 05:22:04.491938 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:04.492026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-09 05:22:04.492049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-09 05:22:04.492062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.492073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:04.492085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 05:22:04.492120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:22:05.736440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-09 05:22:05.736549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:05.736570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-09 05:22:05.736608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:05.736620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2026-02-09 05:22:05.736665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 05:22:05.736679 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:05.736692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:05.736704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:05.736716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-09 05:22:05.736729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:22:05.736750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-09 05:22:05.736774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:18.126257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:22:18.126357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-09 05:22:18.126367 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:18.126376 | orchestrator | 2026-02-09 05:22:18.126382 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-09 05:22:18.126391 | orchestrator | Monday 09 February 2026 05:22:05 +0000 (0:00:02.398) 0:08:09.743 ******* 2026-02-09 05:22:18.126398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-09 05:22:18.126408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-09 05:22:18.126438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:22:18.126445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 
'backend_http_extra': ['option httpchk']}})  2026-02-09 05:22:18.126452 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:18.126458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-09 05:22:18.126464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-09 05:22:18.126482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:22:18.126501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:22:18.126508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-09 05:22:18.126514 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:18.126519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-09 05:22:18.126526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:22:18.126531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-09 05:22:18.126543 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:18.126549 | orchestrator | 2026-02-09 05:22:18.126555 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-09 05:22:18.126561 | orchestrator | Monday 09 February 2026 05:22:07 +0000 (0:00:01.852) 0:08:11.596 ******* 2026-02-09 05:22:18.126567 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:18.126573 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:18.126579 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:18.126584 | orchestrator | 2026-02-09 
05:22:18.126590 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-09 05:22:18.126596 | orchestrator | Monday 09 February 2026 05:22:09 +0000 (0:00:02.007) 0:08:13.603 ******* 2026-02-09 05:22:18.126601 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:18.126607 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:18.126613 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:18.126618 | orchestrator | 2026-02-09 05:22:18.126624 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-09 05:22:18.126630 | orchestrator | Monday 09 February 2026 05:22:11 +0000 (0:00:02.358) 0:08:15.962 ******* 2026-02-09 05:22:18.126636 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:22:18.126641 | orchestrator | 2026-02-09 05:22:18.126647 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-09 05:22:18.126652 | orchestrator | Monday 09 February 2026 05:22:14 +0000 (0:00:02.333) 0:08:18.296 ******* 2026-02-09 05:22:18.126660 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:22:18.126678 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:22:36.079577 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:22:36.079688 | orchestrator | 2026-02-09 05:22:36.079699 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-09 05:22:36.079707 | orchestrator | Monday 09 February 2026 05:22:18 +0000 (0:00:03.827) 0:08:22.124 ******* 2026-02-09 05:22:36.079716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:22:36.079723 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:36.079731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:22:36.079738 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:36.079768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:22:36.079782 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:36.079789 | orchestrator | 2026-02-09 05:22:36.079795 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 
2026-02-09 05:22:36.079802 | orchestrator | Monday 09 February 2026 05:22:19 +0000 (0:00:01.497) 0:08:23.622 ******* 2026-02-09 05:22:36.079809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-09 05:22:36.079817 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:36.079823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-09 05:22:36.079829 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:36.079835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-09 05:22:36.079842 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:36.079848 | orchestrator | 2026-02-09 05:22:36.079854 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-09 05:22:36.079860 | orchestrator | Monday 09 February 2026 05:22:21 +0000 (0:00:01.549) 0:08:25.171 ******* 2026-02-09 05:22:36.079866 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:36.079872 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:36.079878 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:36.079885 | orchestrator | 2026-02-09 05:22:36.079891 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-09 05:22:36.079897 | orchestrator | Monday 09 February 2026 05:22:23 +0000 (0:00:01.949) 0:08:27.121 ******* 2026-02-09 05:22:36.079903 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:36.079909 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:36.079915 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:36.079921 | orchestrator | 2026-02-09 
05:22:36.079928 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-09 05:22:36.079934 | orchestrator | Monday 09 February 2026 05:22:25 +0000 (0:00:02.332) 0:08:29.453 ******* 2026-02-09 05:22:36.079963 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:22:36.079969 | orchestrator | 2026-02-09 05:22:36.079975 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-09 05:22:36.079981 | orchestrator | Monday 09 February 2026 05:22:27 +0000 (0:00:02.425) 0:08:31.879 ******* 2026-02-09 05:22:36.079988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-09 05:22:36.079999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-09 05:22:36.080022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-09 05:22:37.809650 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-09 05:22:37.809787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-09 05:22:37.809828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-09 05:22:37.809869 | orchestrator | 2026-02-09 05:22:37.809882 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-09 05:22:37.809895 | orchestrator | Monday 09 February 2026 05:22:36 +0000 (0:00:08.203) 0:08:40.083 ******* 2026-02-09 05:22:37.809930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-09 05:22:37.810085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-09 05:22:37.810100 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:37.810120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-09 05:22:37.810145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-09 05:22:37.810160 | orchestrator | 
skipping: [testbed-node-1] 2026-02-09 05:22:37.810186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-09 05:22:59.833331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-09 05:22:59.833466 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:59.833489 | orchestrator | 2026-02-09 05:22:59.833507 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-09 05:22:59.833524 | orchestrator | Monday 09 February 2026 05:22:37 +0000 (0:00:01.729) 0:08:41.813 ******* 2026-02-09 05:22:59.833573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-09 05:22:59.833594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-09 05:22:59.833631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:22:59.833648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:22:59.833665 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:59.833681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-09 05:22:59.833698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-09 05:22:59.833714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:22:59.833732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:22:59.833748 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:59.833764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-09 05:22:59.833782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-09 05:22:59.833820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:22:59.833837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-09 05:22:59.833855 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:59.833871 | orchestrator | 2026-02-09 05:22:59.833889 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-09 05:22:59.833907 | orchestrator | Monday 09 February 2026 05:22:40 +0000 (0:00:02.206) 0:08:44.019 ******* 2026-02-09 05:22:59.833960 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:22:59.833979 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:22:59.833995 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:22:59.834012 | orchestrator | 2026-02-09 05:22:59.834094 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-09 05:22:59.834112 | orchestrator | Monday 09 February 2026 05:22:42 +0000 (0:00:02.333) 0:08:46.352 ******* 2026-02-09 05:22:59.834128 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:22:59.834145 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:22:59.834162 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:22:59.834179 | orchestrator | 2026-02-09 05:22:59.834212 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-09 05:22:59.834241 | orchestrator | Monday 09 February 2026 05:22:45 +0000 (0:00:03.086) 0:08:49.439 ******* 2026-02-09 05:22:59.834256 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:59.834272 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:59.834287 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:59.834302 | orchestrator | 2026-02-09 
05:22:59.834318 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-09 05:22:59.834334 | orchestrator | Monday 09 February 2026 05:22:46 +0000 (0:00:01.424) 0:08:50.864 ******* 2026-02-09 05:22:59.834350 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:59.834365 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:59.834380 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:59.834395 | orchestrator | 2026-02-09 05:22:59.834411 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-09 05:22:59.834427 | orchestrator | Monday 09 February 2026 05:22:48 +0000 (0:00:01.458) 0:08:52.322 ******* 2026-02-09 05:22:59.834443 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:59.834458 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:59.834474 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:59.834489 | orchestrator | 2026-02-09 05:22:59.834505 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-09 05:22:59.834520 | orchestrator | Monday 09 February 2026 05:22:50 +0000 (0:00:01.758) 0:08:54.080 ******* 2026-02-09 05:22:59.834535 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:59.834550 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:59.834565 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:59.834580 | orchestrator | 2026-02-09 05:22:59.834595 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-09 05:22:59.834611 | orchestrator | Monday 09 February 2026 05:22:51 +0000 (0:00:01.392) 0:08:55.473 ******* 2026-02-09 05:22:59.834627 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:22:59.834642 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:22:59.834657 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:22:59.834674 | orchestrator | 2026-02-09 
05:22:59.834690 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-09 05:22:59.834746 | orchestrator | Monday 09 February 2026 05:22:52 +0000 (0:00:01.430) 0:08:56.904 ******* 2026-02-09 05:22:59.834763 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:22:59.834780 | orchestrator | 2026-02-09 05:22:59.834794 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-09 05:22:59.834810 | orchestrator | Monday 09 February 2026 05:22:55 +0000 (0:00:02.783) 0:08:59.688 ******* 2026-02-09 05:22:59.834828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-09 05:22:59.834870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 
'timeout': '30'}}}) 2026-02-09 05:23:04.244349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-09 05:23:04.244455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:23:04.244492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:23:04.244514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-09 05:23:04.244527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:23:04.244563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:23:04.244594 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-09 05:23:04.244607 | orchestrator | 2026-02-09 05:23:04.244620 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-09 05:23:04.244632 | orchestrator | Monday 09 February 2026 05:22:59 +0000 (0:00:04.148) 0:09:03.836 ******* 2026-02-09 05:23:04.244644 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:23:04.244656 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:23:04.244667 | orchestrator | } 2026-02-09 05:23:04.244678 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:23:04.244689 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:23:04.244700 | orchestrator | } 2026-02-09 05:23:04.244711 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 05:23:04.244721 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:23:04.244732 | orchestrator | } 2026-02-09 05:23:04.244743 | orchestrator | 2026-02-09 05:23:04.244754 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-09 05:23:04.244765 | orchestrator | Monday 09 February 2026 05:23:01 +0000 (0:00:01.484) 0:09:05.321 ******* 2026-02-09 05:23:04.244776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-09 05:23:04.244794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:23:04.244806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:23:04.244825 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:23:04.244836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-09 05:23:04.244848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:23:04.244868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:25:05.623812 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:25:05.624003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-09 05:25:05.624043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-09 05:25:05.624057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-09 05:25:05.624069 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:25:05.624102 | orchestrator | 2026-02-09 05:25:05.624115 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-09 05:25:05.624127 | orchestrator | Monday 09 February 2026 
05:23:04 +0000 (0:00:02.920) 0:09:08.242 ******* 2026-02-09 05:25:05.624138 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:25:05.624150 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:25:05.624161 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:25:05.624171 | orchestrator | 2026-02-09 05:25:05.624182 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-09 05:25:05.624193 | orchestrator | Monday 09 February 2026 05:23:06 +0000 (0:00:01.782) 0:09:10.025 ******* 2026-02-09 05:25:05.624205 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:25:05.624216 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:25:05.624226 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:25:05.624238 | orchestrator | 2026-02-09 05:25:05.624258 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-09 05:25:05.624277 | orchestrator | Monday 09 February 2026 05:23:07 +0000 (0:00:01.408) 0:09:11.433 ******* 2026-02-09 05:25:05.624297 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:25:05.624317 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:25:05.624336 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:25:05.624357 | orchestrator | 2026-02-09 05:25:05.624375 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-09 05:25:05.624395 | orchestrator | Monday 09 February 2026 05:23:14 +0000 (0:00:07.037) 0:09:18.471 ******* 2026-02-09 05:25:05.624413 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:25:05.624432 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:25:05.624452 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:25:05.624470 | orchestrator | 2026-02-09 05:25:05.624487 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-09 05:25:05.624501 | orchestrator | Monday 09 February 2026 05:23:21 +0000 (0:00:07.493) 0:09:25.964 
******* 2026-02-09 05:25:05.624514 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:25:05.624528 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:25:05.624547 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:25:05.624566 | orchestrator | 2026-02-09 05:25:05.624586 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-09 05:25:05.624606 | orchestrator | Monday 09 February 2026 05:23:29 +0000 (0:00:07.115) 0:09:33.079 ******* 2026-02-09 05:25:05.624625 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:25:05.624643 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:25:05.624654 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:25:05.624666 | orchestrator | 2026-02-09 05:25:05.624685 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-09 05:25:05.624702 | orchestrator | Monday 09 February 2026 05:23:36 +0000 (0:00:07.894) 0:09:40.974 ******* 2026-02-09 05:25:05.624719 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:25:05.624736 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:25:05.624755 | orchestrator | 2026-02-09 05:25:05.624774 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-09 05:25:05.624792 | orchestrator | Monday 09 February 2026 05:23:40 +0000 (0:00:03.694) 0:09:44.668 ******* 2026-02-09 05:25:05.624811 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:25:05.624822 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:25:05.624833 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:25:05.624844 | orchestrator | 2026-02-09 05:25:05.624874 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-09 05:25:05.624886 | orchestrator | Monday 09 February 2026 05:23:54 +0000 (0:00:13.648) 0:09:58.317 ******* 2026-02-09 05:25:05.624897 | orchestrator | ok: [testbed-node-1] 2026-02-09 
05:25:05.624939 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:25:05.624952 | orchestrator | 2026-02-09 05:25:05.624963 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-09 05:25:05.624974 | orchestrator | Monday 09 February 2026 05:23:57 +0000 (0:00:03.682) 0:10:01.999 ******* 2026-02-09 05:25:05.624997 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:25:05.625008 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:25:05.625019 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:25:05.625029 | orchestrator | 2026-02-09 05:25:05.625040 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-09 05:25:05.625050 | orchestrator | Monday 09 February 2026 05:24:05 +0000 (0:00:07.359) 0:10:09.359 ******* 2026-02-09 05:25:05.625064 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:25:05.625083 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:25:05.625102 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:25:05.625120 | orchestrator | 2026-02-09 05:25:05.625137 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-09 05:25:05.625149 | orchestrator | Monday 09 February 2026 05:24:12 +0000 (0:00:06.858) 0:10:16.218 ******* 2026-02-09 05:25:05.625160 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:25:05.625170 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:25:05.625181 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:25:05.625192 | orchestrator | 2026-02-09 05:25:05.625202 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-09 05:25:05.625213 | orchestrator | Monday 09 February 2026 05:24:19 +0000 (0:00:06.901) 0:10:23.119 ******* 2026-02-09 05:25:05.625224 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:25:05.625234 | orchestrator | skipping: [testbed-node-2] 2026-02-09 
05:25:05.625245 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:25:05.625256 | orchestrator | 2026-02-09 05:25:05.625274 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-09 05:25:05.625285 | orchestrator | Monday 09 February 2026 05:24:25 +0000 (0:00:06.818) 0:10:29.938 ******* 2026-02-09 05:25:05.625296 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:25:05.625307 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:25:05.625317 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:25:05.625328 | orchestrator | 2026-02-09 05:25:05.625338 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-09 05:25:05.625349 | orchestrator | Monday 09 February 2026 05:24:33 +0000 (0:00:07.295) 0:10:37.233 ******* 2026-02-09 05:25:05.625360 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:25:05.625371 | orchestrator | 2026-02-09 05:25:05.625381 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-09 05:25:05.625392 | orchestrator | Monday 09 February 2026 05:24:36 +0000 (0:00:03.682) 0:10:40.915 ******* 2026-02-09 05:25:05.625403 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:25:05.625414 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:25:05.625424 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:25:05.625435 | orchestrator | 2026-02-09 05:25:05.625445 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-09 05:25:05.625456 | orchestrator | Monday 09 February 2026 05:24:49 +0000 (0:00:13.081) 0:10:53.997 ******* 2026-02-09 05:25:05.625467 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:25:05.625477 | orchestrator | 2026-02-09 05:25:05.625488 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-09 05:25:05.625499 | orchestrator | Monday 09 February 2026 
05:24:53 +0000 (0:00:03.795) 0:10:57.792 ******* 2026-02-09 05:25:05.625510 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:25:05.625520 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:25:05.625531 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:25:05.625541 | orchestrator | 2026-02-09 05:25:05.625552 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-09 05:25:05.625563 | orchestrator | Monday 09 February 2026 05:25:00 +0000 (0:00:06.967) 0:11:04.759 ******* 2026-02-09 05:25:05.625573 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:25:05.625584 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:25:05.625595 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:25:05.625606 | orchestrator | 2026-02-09 05:25:05.625616 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-09 05:25:05.625634 | orchestrator | Monday 09 February 2026 05:25:02 +0000 (0:00:02.057) 0:11:06.817 ******* 2026-02-09 05:25:05.625645 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:25:05.625656 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:25:05.625666 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:25:05.625677 | orchestrator | 2026-02-09 05:25:05.625688 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 05:25:05.625700 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-09 05:25:05.625713 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-09 05:25:05.625724 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-09 05:25:05.625734 | orchestrator | 2026-02-09 05:25:05.625745 | orchestrator | 2026-02-09 05:25:05.625756 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-09 05:25:05.625767 | orchestrator | Monday 09 February 2026 05:25:05 +0000 (0:00:02.802) 0:11:09.619 ******* 2026-02-09 05:25:05.625777 | orchestrator | =============================================================================== 2026-02-09 05:25:05.625788 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.65s 2026-02-09 05:25:05.625799 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.08s 2026-02-09 05:25:05.625810 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.20s 2026-02-09 05:25:05.625828 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.90s 2026-02-09 05:25:06.588103 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.55s 2026-02-09 05:25:06.588208 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.49s 2026-02-09 05:25:06.588224 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.36s 2026-02-09 05:25:06.588236 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.30s 2026-02-09 05:25:06.588247 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.12s 2026-02-09 05:25:06.588258 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.04s 2026-02-09 05:25:06.588268 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.97s 2026-02-09 05:25:06.588279 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.90s 2026-02-09 05:25:06.588289 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.86s 2026-02-09 05:25:06.588300 | orchestrator | loadbalancer : Stop master 
keepalived container ------------------------- 6.82s 2026-02-09 05:25:06.588310 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.60s 2026-02-09 05:25:06.588321 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.10s 2026-02-09 05:25:06.588331 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.82s 2026-02-09 05:25:06.588341 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.82s 2026-02-09 05:25:06.588352 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.64s 2026-02-09 05:25:06.588363 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 5.38s 2026-02-09 05:25:06.914991 | orchestrator | + osism apply -a upgrade opensearch 2026-02-09 05:25:09.002143 | orchestrator | 2026-02-09 05:25:09 | INFO  | Task 33e9b14e-7a1a-4a2d-96d3-798f57b516d0 (opensearch) was prepared for execution. 2026-02-09 05:25:09.002232 | orchestrator | 2026-02-09 05:25:09 | INFO  | It takes a moment until task 33e9b14e-7a1a-4a2d-96d3-798f57b516d0 (opensearch) has been started and output is visible here. 
2026-02-09 05:25:20.154215 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-09 05:25:20.154344 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-09 05:25:20.154368 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-09 05:25:20.154377 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-09 05:25:20.154395 | orchestrator | 2026-02-09 05:25:20.154405 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 05:25:20.154413 | orchestrator | 2026-02-09 05:25:20.154426 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 05:25:20.154440 | orchestrator | Monday 09 February 2026 05:25:14 +0000 (0:00:01.146) 0:00:01.146 ******* 2026-02-09 05:25:20.154454 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:25:20.154469 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:25:20.154483 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:25:20.154495 | orchestrator | 2026-02-09 05:25:20.154504 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 05:25:20.154513 | orchestrator | Monday 09 February 2026 05:25:15 +0000 (0:00:00.876) 0:00:02.023 ******* 2026-02-09 05:25:20.154522 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-09 05:25:20.154531 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-09 05:25:20.154539 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-09 05:25:20.154548 | orchestrator | 2026-02-09 05:25:20.154556 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-09 05:25:20.154565 | orchestrator | 2026-02-09 05:25:20.154573 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 
2026-02-09 05:25:20.154582 | orchestrator | Monday 09 February 2026 05:25:16 +0000 (0:00:00.888) 0:00:02.912 ******* 2026-02-09 05:25:20.154590 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:25:20.154599 | orchestrator | 2026-02-09 05:25:20.154607 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-09 05:25:20.154616 | orchestrator | Monday 09 February 2026 05:25:17 +0000 (0:00:01.136) 0:00:04.049 ******* 2026-02-09 05:25:20.154624 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-09 05:25:20.154633 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-09 05:25:20.154642 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-09 05:25:20.154650 | orchestrator | 2026-02-09 05:25:20.154659 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-09 05:25:20.154668 | orchestrator | Monday 09 February 2026 05:25:18 +0000 (0:00:01.319) 0:00:05.368 ******* 2026-02-09 05:25:20.154680 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:20.154707 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:20.154742 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 
05:25:20.154754 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:20.154767 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:20.154796 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:24.662384 | orchestrator | 2026-02-09 05:25:24.662509 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-09 05:25:24.662522 | orchestrator | Monday 09 February 2026 05:25:20 +0000 (0:00:01.381) 0:00:06.750 ******* 2026-02-09 05:25:24.662530 | orchestrator | 
included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:25:24.662538 | orchestrator | 2026-02-09 05:25:24.662546 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-09 05:25:24.662553 | orchestrator | Monday 09 February 2026 05:25:21 +0000 (0:00:00.982) 0:00:07.733 ******* 2026-02-09 05:25:24.662563 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:24.662574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:24.662582 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:24.662654 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:24.662665 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:24.662673 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:24.662686 | orchestrator | 2026-02-09 05:25:24.662693 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-09 05:25:24.662700 | orchestrator | Monday 09 February 2026 05:25:23 +0000 (0:00:02.672) 0:00:10.405 ******* 2026-02-09 05:25:24.662712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:25:24.662727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:25:25.725503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:25:25.725637 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:25:25.725649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:25:25.725685 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:25:25.725709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:25:25.725736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:25:25.725743 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:25:25.725749 | orchestrator | 
2026-02-09 05:25:25.725756 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-09 05:25:25.725764 | orchestrator | Monday 09 February 2026 05:25:24 +0000 (0:00:00.866) 0:00:11.272 ******* 2026-02-09 05:25:25.725770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:25:25.725782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:25:25.725792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:25:25.725798 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:25:25.725811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:25:28.494453 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:25:28.494592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:25:28.494650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:25:28.494665 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:25:28.494677 | orchestrator | 2026-02-09 05:25:28.494689 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-09 05:25:28.494722 | orchestrator | Monday 09 February 2026 05:25:25 +0000 (0:00:01.058) 0:00:12.330 ******* 2026-02-09 05:25:28.494734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:28.494767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:28.494779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:28.494800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:28.494819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:28.494842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:37.415190 | orchestrator | 2026-02-09 05:25:37.415330 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-09 05:25:37.415351 | 
orchestrator | Monday 09 February 2026 05:25:28 +0000 (0:00:02.766) 0:00:15.096 ******* 2026-02-09 05:25:37.415363 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:25:37.415375 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:25:37.415386 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:25:37.415397 | orchestrator | 2026-02-09 05:25:37.415409 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-09 05:25:37.415420 | orchestrator | Monday 09 February 2026 05:25:30 +0000 (0:00:02.482) 0:00:17.579 ******* 2026-02-09 05:25:37.415430 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:25:37.415441 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:25:37.415452 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:25:37.415463 | orchestrator | 2026-02-09 05:25:37.415473 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-09 05:25:37.415484 | orchestrator | Monday 09 February 2026 05:25:32 +0000 (0:00:01.997) 0:00:19.576 ******* 2026-02-09 05:25:37.415498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 
05:25:37.415530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:37.415543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-09 05:25:37.415604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:37.415623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:37.415644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-09 05:25:37.415658 | orchestrator | 2026-02-09 05:25:37.415671 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-09 05:25:37.415684 | orchestrator | Monday 09 February 2026 05:25:35 +0000 (0:00:02.778) 0:00:22.355 ******* 2026-02-09 05:25:37.415698 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:25:37.415711 | orchestrator |  "msg": 
"Notifying handlers" 2026-02-09 05:25:37.415733 | orchestrator | } 2026-02-09 05:25:37.415746 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:25:37.415758 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:25:37.415771 | orchestrator | } 2026-02-09 05:25:37.415784 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 05:25:37.415796 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:25:37.415808 | orchestrator | } 2026-02-09 05:25:37.415821 | orchestrator | 2026-02-09 05:25:37.415834 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-09 05:25:37.415847 | orchestrator | Monday 09 February 2026 05:25:36 +0000 (0:00:00.385) 0:00:22.740 ******* 2026-02-09 05:25:37.415869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:28:35.196047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:28:35.196169 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:28:35.196206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:28:35.196221 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:28:35.196259 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:28:35.196291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-09 05:28:35.196305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-09 05:28:35.196317 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:28:35.196328 | orchestrator | 2026-02-09 05:28:35.196340 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-09 05:28:35.196362 | orchestrator | Monday 09 February 2026 05:25:37 +0000 (0:00:01.285) 0:00:24.026 ******* 2026-02-09 05:28:35.196386 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:28:35.196414 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-09 05:28:35.196433 | 
orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-09 05:28:35.196470 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:28:35.196487 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:28:35.196506 | orchestrator | 2026-02-09 05:28:35.196525 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-09 05:28:35.196563 | orchestrator | Monday 09 February 2026 05:25:37 +0000 (0:00:00.560) 0:00:24.587 ******* 2026-02-09 05:28:35.196582 | orchestrator | 2026-02-09 05:28:35.196601 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-09 05:28:35.196620 | orchestrator | Monday 09 February 2026 05:25:38 +0000 (0:00:00.113) 0:00:24.700 ******* 2026-02-09 05:28:35.196641 | orchestrator | 2026-02-09 05:28:35.196660 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-09 05:28:35.196679 | orchestrator | Monday 09 February 2026 05:25:38 +0000 (0:00:00.104) 0:00:24.805 ******* 2026-02-09 05:28:35.196695 | orchestrator | 2026-02-09 05:28:35.196706 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-09 05:28:35.196717 | orchestrator | Monday 09 February 2026 05:25:38 +0000 (0:00:00.084) 0:00:24.889 ******* 2026-02-09 05:28:35.196728 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:28:35.196739 | orchestrator | 2026-02-09 05:28:35.196750 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-09 05:28:35.196760 | orchestrator | Monday 09 February 2026 05:25:40 +0000 (0:00:02.498) 0:00:27.388 ******* 2026-02-09 05:28:35.196771 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:28:35.196782 | orchestrator | 2026-02-09 05:28:35.196792 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-09 05:28:35.196803 | orchestrator | Monday 09 February 
2026 05:25:45 +0000 (0:00:05.199) 0:00:32.587 ******* 2026-02-09 05:28:35.196813 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:28:35.196824 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:28:35.196835 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:28:35.196845 | orchestrator | 2026-02-09 05:28:35.196856 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-09 05:28:35.196867 | orchestrator | Monday 09 February 2026 05:26:56 +0000 (0:01:10.926) 0:01:43.514 ******* 2026-02-09 05:28:35.196877 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:28:35.196916 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:28:35.196929 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:28:35.196940 | orchestrator | 2026-02-09 05:28:35.196950 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-09 05:28:35.196961 | orchestrator | Monday 09 February 2026 05:28:29 +0000 (0:01:32.851) 0:03:16.365 ******* 2026-02-09 05:28:35.196972 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:28:35.196983 | orchestrator | 2026-02-09 05:28:35.196993 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-09 05:28:35.197004 | orchestrator | Monday 09 February 2026 05:28:30 +0000 (0:00:00.969) 0:03:17.335 ******* 2026-02-09 05:28:35.197014 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:28:35.197025 | orchestrator | 2026-02-09 05:28:35.197036 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-09 05:28:35.197046 | orchestrator | Monday 09 February 2026 05:28:32 +0000 (0:00:02.236) 0:03:19.571 ******* 2026-02-09 05:28:35.197057 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:28:35.197067 | orchestrator | 2026-02-09 05:28:35.197089 | orchestrator | TASK 
[opensearch : Create new log retention policy] **************************** 2026-02-09 05:28:37.435768 | orchestrator | Monday 09 February 2026 05:28:35 +0000 (0:00:02.225) 0:03:21.797 ******* 2026-02-09 05:28:37.435853 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:28:37.435866 | orchestrator | 2026-02-09 05:28:37.435874 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-09 05:28:37.435883 | orchestrator | Monday 09 February 2026 05:28:35 +0000 (0:00:00.253) 0:03:22.050 ******* 2026-02-09 05:28:37.435961 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:28:37.435971 | orchestrator | 2026-02-09 05:28:37.435978 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 05:28:37.435987 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 05:28:37.436021 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 05:28:37.436029 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 05:28:37.436036 | orchestrator | 2026-02-09 05:28:37.436044 | orchestrator | 2026-02-09 05:28:37.436051 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 05:28:37.436059 | orchestrator | Monday 09 February 2026 05:28:37 +0000 (0:00:01.620) 0:03:23.671 ******* 2026-02-09 05:28:37.436071 | orchestrator | =============================================================================== 2026-02-09 05:28:37.436083 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 92.85s 2026-02-09 05:28:37.436095 | orchestrator | opensearch : Restart opensearch container ------------------------------ 70.93s 2026-02-09 05:28:37.436107 | orchestrator | opensearch : Perform a flush 
-------------------------------------------- 5.20s 2026-02-09 05:28:37.436118 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 2.78s 2026-02-09 05:28:37.436130 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.77s 2026-02-09 05:28:37.436159 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.67s 2026-02-09 05:28:37.436172 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 2.50s 2026-02-09 05:28:37.436183 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.48s 2026-02-09 05:28:37.436195 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.24s 2026-02-09 05:28:37.436206 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.23s 2026-02-09 05:28:37.436218 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.00s 2026-02-09 05:28:37.436229 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.62s 2026-02-09 05:28:37.436242 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.38s 2026-02-09 05:28:37.436254 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.32s 2026-02-09 05:28:37.436266 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.29s 2026-02-09 05:28:37.436278 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.14s 2026-02-09 05:28:37.436291 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.06s 2026-02-09 05:28:37.436319 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.98s 2026-02-09 05:28:37.436344 | orchestrator | opensearch : include_tasks 
---------------------------------------------- 0.97s 2026-02-09 05:28:37.436358 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.89s 2026-02-09 05:28:37.768045 | orchestrator | + osism apply -a upgrade memcached 2026-02-09 05:28:39.854860 | orchestrator | 2026-02-09 05:28:39 | INFO  | Task 1a0c25d4-fe01-495f-bdab-f91c33b08c2c (memcached) was prepared for execution. 2026-02-09 05:28:39.855000 | orchestrator | 2026-02-09 05:28:39 | INFO  | It takes a moment until task 1a0c25d4-fe01-495f-bdab-f91c33b08c2c (memcached) has been started and output is visible here. 2026-02-09 05:29:14.173275 | orchestrator | 2026-02-09 05:29:14.173429 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 05:29:14.173446 | orchestrator | 2026-02-09 05:29:14.173458 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 05:29:14.173469 | orchestrator | Monday 09 February 2026 05:28:45 +0000 (0:00:01.532) 0:00:01.532 ******* 2026-02-09 05:29:14.173481 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:29:14.173493 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:29:14.173505 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:29:14.173517 | orchestrator | 2026-02-09 05:29:14.173557 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 05:29:14.173569 | orchestrator | Monday 09 February 2026 05:28:47 +0000 (0:00:01.971) 0:00:03.504 ******* 2026-02-09 05:29:14.173581 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-09 05:29:14.173593 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-09 05:29:14.173604 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-09 05:29:14.173615 | orchestrator | 2026-02-09 05:29:14.173626 | orchestrator | PLAY [Apply role memcached] 
**************************************************** 2026-02-09 05:29:14.173637 | orchestrator | 2026-02-09 05:29:14.173648 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-09 05:29:14.173659 | orchestrator | Monday 09 February 2026 05:28:50 +0000 (0:00:02.446) 0:00:05.951 ******* 2026-02-09 05:29:14.173670 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:29:14.173682 | orchestrator | 2026-02-09 05:29:14.173692 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-09 05:29:14.173703 | orchestrator | Monday 09 February 2026 05:28:52 +0000 (0:00:02.744) 0:00:08.696 ******* 2026-02-09 05:29:14.173714 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-09 05:29:14.173725 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-09 05:29:14.173736 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-09 05:29:14.173747 | orchestrator | 2026-02-09 05:29:14.173758 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-09 05:29:14.173768 | orchestrator | Monday 09 February 2026 05:28:54 +0000 (0:00:01.984) 0:00:10.680 ******* 2026-02-09 05:29:14.173779 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-09 05:29:14.173793 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-09 05:29:14.173807 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-09 05:29:14.173819 | orchestrator | 2026-02-09 05:29:14.173832 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-09 05:29:14.173845 | orchestrator | Monday 09 February 2026 05:28:57 +0000 (0:00:02.756) 0:00:13.436 ******* 2026-02-09 05:29:14.173880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 
'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-09 05:29:14.173924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-09 05:29:14.173960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-09 05:29:14.173982 | orchestrator | 2026-02-09 05:29:14.173993 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-02-09 05:29:14.174004 | orchestrator | Monday 09 February 2026 05:28:59 +0000 (0:00:02.176) 0:00:15.613 ******* 2026-02-09 05:29:14.174087 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:29:14.174103 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:29:14.174114 | orchestrator | } 2026-02-09 05:29:14.174125 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:29:14.174136 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:29:14.174147 | orchestrator | } 2026-02-09 05:29:14.174157 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 05:29:14.174168 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:29:14.174178 | orchestrator | } 2026-02-09 05:29:14.174189 | orchestrator | 2026-02-09 05:29:14.174200 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-09 05:29:14.174211 | orchestrator | Monday 09 February 2026 05:29:01 +0000 (0:00:01.429) 0:00:17.042 ******* 2026-02-09 05:29:14.174222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-09 05:29:14.174234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-09 05:29:14.174246 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:29:14.174257 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:29:14.174275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  
2026-02-09 05:29:14.174296 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:29:14.174308 | orchestrator | 2026-02-09 05:29:14.174319 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-09 05:29:14.174330 | orchestrator | Monday 09 February 2026 05:29:03 +0000 (0:00:02.031) 0:00:19.073 ******* 2026-02-09 05:29:14.174341 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:29:14.174351 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:29:14.174362 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:29:14.174373 | orchestrator | 2026-02-09 05:29:14.174383 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 05:29:14.174396 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 05:29:14.174408 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 05:29:14.174419 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 05:29:14.174430 | orchestrator | 2026-02-09 05:29:14.174441 | orchestrator | 2026-02-09 05:29:14.174452 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 05:29:14.174471 | orchestrator | Monday 09 February 2026 05:29:14 +0000 (0:00:10.810) 0:00:29.884 ******* 2026-02-09 05:29:14.555013 | orchestrator | =============================================================================== 2026-02-09 05:29:14.555127 | orchestrator | memcached : Restart memcached container -------------------------------- 10.81s 2026-02-09 05:29:14.555139 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.76s 2026-02-09 05:29:14.555149 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.74s 2026-02-09 05:29:14.555159 | orchestrator 
| Group hosts based on enabled services ----------------------------------- 2.45s 2026-02-09 05:29:14.555169 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.18s 2026-02-09 05:29:14.555183 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.03s 2026-02-09 05:29:14.555199 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.98s 2026-02-09 05:29:14.555214 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.97s 2026-02-09 05:29:14.555231 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.43s 2026-02-09 05:29:14.892662 | orchestrator | + osism apply -a upgrade redis 2026-02-09 05:29:17.023837 | orchestrator | 2026-02-09 05:29:17 | INFO  | Task d2cd59cd-8731-4da9-9636-fe1f82761533 (redis) was prepared for execution. 2026-02-09 05:29:17.023971 | orchestrator | 2026-02-09 05:29:17 | INFO  | It takes a moment until task d2cd59cd-8731-4da9-9636-fe1f82761533 (redis) has been started and output is visible here. 
2026-02-09 05:29:34.928535 | orchestrator | 2026-02-09 05:29:34.928669 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 05:29:34.928683 | orchestrator | 2026-02-09 05:29:34.928693 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 05:29:34.928702 | orchestrator | Monday 09 February 2026 05:29:23 +0000 (0:00:01.725) 0:00:01.725 ******* 2026-02-09 05:29:34.928711 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:29:34.928721 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:29:34.928729 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:29:34.928736 | orchestrator | 2026-02-09 05:29:34.928744 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 05:29:34.928753 | orchestrator | Monday 09 February 2026 05:29:24 +0000 (0:00:01.778) 0:00:03.504 ******* 2026-02-09 05:29:34.928761 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-09 05:29:34.928770 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-09 05:29:34.928810 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-09 05:29:34.928818 | orchestrator | 2026-02-09 05:29:34.928826 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-09 05:29:34.928834 | orchestrator | 2026-02-09 05:29:34.928842 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-09 05:29:34.928850 | orchestrator | Monday 09 February 2026 05:29:27 +0000 (0:00:02.865) 0:00:06.370 ******* 2026-02-09 05:29:34.928858 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:29:34.928867 | orchestrator | 2026-02-09 05:29:34.928875 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-09 
05:29:34.928906 | orchestrator | Monday 09 February 2026 05:29:29 +0000 (0:00:01.959) 0:00:08.330 ******* 2026-02-09 05:29:34.928937 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 05:29:34.928953 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 05:29:34.928964 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-09 05:29:34.928980 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 05:29:34.929020 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-09 05:29:34.929046 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:29:34.929060 | orchestrator |
2026-02-09 05:29:34.929074 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-09 05:29:34.929088 | orchestrator | Monday 09 February 2026 05:29:31 +0000 (0:00:02.183) 0:00:10.513 *******
2026-02-09 05:29:34.929110 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:29:34.929125 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:29:34.929140 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:29:34.929151 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:29:34.929167 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767326 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767487 | orchestrator |
2026-02-09 05:29:41.767518 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-09 05:29:41.767541 | orchestrator | Monday 09 February 2026 05:29:34 +0000 (0:00:03.093) 0:00:13.607 *******
2026-02-09 05:29:41.767591 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767616 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767635 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767655 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767674 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767757 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767780 | orchestrator |
2026-02-09 05:29:41.767800 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-02-09 05:29:41.767821 | orchestrator | Monday 09 February 2026 05:29:38 +0000 (0:00:03.802) 0:00:17.409 *******
2026-02-09 05:29:41.767843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:29:41.767968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:30:09.460872 | orchestrator |
2026-02-09 05:30:09.461056 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-02-09 05:30:09.461076 | orchestrator | Monday 09 February 2026 05:29:41 +0000 (0:00:03.045) 0:00:20.454 *******
2026-02-09 05:30:09.461089 | orchestrator | changed: [testbed-node-0] => {
2026-02-09 05:30:09.461101 | orchestrator |     "msg": "Notifying handlers"
2026-02-09 05:30:09.461112 | orchestrator | }
2026-02-09 05:30:09.461123 | orchestrator | changed: [testbed-node-1] => {
2026-02-09 05:30:09.461134 | orchestrator |     "msg": "Notifying handlers"
2026-02-09 05:30:09.461145 | orchestrator | }
2026-02-09 05:30:09.461156 | orchestrator | changed: [testbed-node-2] => {
2026-02-09 05:30:09.461167 | orchestrator |     "msg": "Notifying handlers"
2026-02-09 05:30:09.461177 | orchestrator | }
2026-02-09 05:30:09.461190 | orchestrator |
2026-02-09 05:30:09.461210 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-09 05:30:09.461230 | orchestrator | Monday 09 February 2026 05:29:43 +0000 (0:00:01.642) 0:00:22.097 *******
2026-02-09 05:30:09.461253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:30:09.461277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:30:09.461299 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:30:09.461344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:30:09.461387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:30:09.461399 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:30:09.461413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-09 05:30:09.461485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-09 05:30:09.461509 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:30:09.461526 | orchestrator |
2026-02-09 05:30:09.461540 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-09 05:30:09.461558 | orchestrator | Monday 09 February 2026 05:29:45 +0000 (0:00:01.981) 0:00:24.079 *******
2026-02-09 05:30:09.461577 | orchestrator |
2026-02-09 05:30:09.461596 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-09 05:30:09.461615 | orchestrator | Monday 09 February 2026 05:29:45 +0000 (0:00:00.476) 0:00:24.556 *******
2026-02-09 05:30:09.461637 | orchestrator |
2026-02-09 05:30:09.461656 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-09 05:30:09.461672 | orchestrator | Monday 09 February 2026 05:29:46 +0000 (0:00:00.487) 0:00:25.043 *******
2026-02-09 05:30:09.461686 | orchestrator |
2026-02-09 05:30:09.461699 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-09 05:30:09.461711 | orchestrator | Monday 09 February 2026 05:29:47 +0000 (0:00:00.784) 0:00:25.828 *******
2026-02-09 05:30:09.461724 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:30:09.461737 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:30:09.461751 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:30:09.461766 | orchestrator |
2026-02-09 05:30:09.461786 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-09 05:30:09.461804 | orchestrator | Monday 09 February 2026 05:29:57 +0000 (0:00:10.734) 0:00:36.562 *******
2026-02-09 05:30:09.461815 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:30:09.461825 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:30:09.461854 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:30:09.461873 | orchestrator |
2026-02-09 05:30:09.461917 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 05:30:09.461937 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-09 05:30:09.461958 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-09 05:30:09.461979 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-09 05:30:09.462012 | orchestrator |
2026-02-09 05:30:09.462090 | orchestrator |
2026-02-09 05:30:09.462103 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 05:30:09.462113 | orchestrator | Monday 09 February 2026 05:30:08 +0000 (0:00:11.100) 0:00:47.663 *******
2026-02-09 05:30:09.462124 | orchestrator | ===============================================================================
2026-02-09 05:30:09.462135 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.10s
2026-02-09 05:30:09.462145 | orchestrator | redis : Restart redis container ---------------------------------------- 10.73s
2026-02-09 05:30:09.462156 | orchestrator | redis : Copying over redis config files --------------------------------- 3.80s
2026-02-09 05:30:09.462167 | orchestrator | redis : Copying over default config.json files -------------------------- 3.09s
2026-02-09 05:30:09.462177 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.05s
2026-02-09 05:30:09.462188 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.87s
2026-02-09 05:30:09.462199 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.18s
2026-02-09 05:30:09.462209 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.98s
2026-02-09 05:30:09.462220 | orchestrator | redis : include_tasks --------------------------------------------------- 1.96s
2026-02-09 05:30:09.462231 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.78s
2026-02-09 05:30:09.462241 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.75s
2026-02-09 05:30:09.462252 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.64s
2026-02-09 05:30:09.807130 | orchestrator | + osism apply -a upgrade mariadb
2026-02-09 05:30:11.930675 | orchestrator | 2026-02-09 05:30:11 | INFO  | Task 6a952b5a-0c2a-4401-9124-a02f17c285e7 (mariadb) was prepared for execution.
2026-02-09 05:30:11.930831 | orchestrator | 2026-02-09 05:30:11 | INFO  | It takes a moment until task 6a952b5a-0c2a-4401-9124-a02f17c285e7 (mariadb) has been started and output is visible here.
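Each `item=` payload echoed by the tasks above is a literal Python dict, so a run like this can be audited mechanically rather than by eye. A minimal sketch (not part of the job output; the payload below is copied from the redis task output above, and the reading of the healthcheck fields as seconds/counts is an assumption based on kolla-ansible's container healthcheck options, not stated in the log itself):

```python
import ast

# Service definition copied verbatim from the
# "Copying over default config.json files" task output above.
logged_item = (
    "{'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', "
    "'enabled': True, "
    "'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', "
    "'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', "
    "'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', "
    "'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, "
    "'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', "
    "'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], "
    "'timeout': '30'}}}"
)

# ast.literal_eval parses the literal dict without executing code.
item = ast.literal_eval(logged_item)
service = item["value"]
healthcheck = service["healthcheck"]

# The healthcheck block maps onto a Docker-style HEALTHCHECK: interval,
# timeout and start_period as seconds, retries as a count (assumption:
# kolla-ansible semantics).
print(service["container_name"])   # -> redis
print(healthcheck["test"][1])      # -> healthcheck_listen redis-server 6379
```

The same approach works for the redis-sentinel and mariadb payloads later in the log; only the keys present differ (e.g. `environment`, `haproxy`).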
2026-02-09 05:30:37.220913 | orchestrator |
2026-02-09 05:30:37.221017 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 05:30:37.221024 | orchestrator |
2026-02-09 05:30:37.221029 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 05:30:37.221034 | orchestrator | Monday 09 February 2026 05:30:17 +0000 (0:00:01.426) 0:00:01.426 *******
2026-02-09 05:30:37.221038 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:30:37.221044 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:30:37.221048 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:30:37.221051 | orchestrator |
2026-02-09 05:30:37.221055 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 05:30:37.221059 | orchestrator | Monday 09 February 2026 05:30:19 +0000 (0:00:01.829) 0:00:03.255 *******
2026-02-09 05:30:37.221063 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-09 05:30:37.221068 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-09 05:30:37.221072 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-09 05:30:37.221076 | orchestrator |
2026-02-09 05:30:37.221096 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-09 05:30:37.221100 | orchestrator |
2026-02-09 05:30:37.221104 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-09 05:30:37.221107 | orchestrator | Monday 09 February 2026 05:30:21 +0000 (0:00:01.449) 0:00:05.035 *******
2026-02-09 05:30:37.221111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:30:37.221115 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 05:30:37.221124 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 05:30:37.221128 | orchestrator |
2026-02-09 05:30:37.221132 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-09 05:30:37.221135 | orchestrator | Monday 09 February 2026 05:30:22 +0000 (0:00:01.449) 0:00:06.485 *******
2026-02-09 05:30:37.221140 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 05:30:37.221145 | orchestrator |
2026-02-09 05:30:37.221149 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-02-09 05:30:37.221152 | orchestrator | Monday 09 February 2026 05:30:24 +0000 (0:00:01.922) 0:00:08.407 *******
2026-02-09 05:30:37.221162 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-09 05:30:37.221183 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-09 05:30:37.221191 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-09 05:30:37.221196 | orchestrator |
2026-02-09 05:30:37.221200 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-02-09 05:30:37.221204 | orchestrator | Monday 09 February 2026 05:30:28 +0000 (0:00:03.940) 0:00:12.348 *******
2026-02-09 05:30:37.221208 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:30:37.221212 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:30:37.221216 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:30:37.221220 | orchestrator |
2026-02-09 05:30:37.221223 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-09 05:30:37.221227 | orchestrator | Monday 09 February 2026 05:30:30 +0000 (0:00:01.629) 0:00:13.977 *******
2026-02-09 05:30:37.221231 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:30:37.221234 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:30:37.221238 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:30:37.221242 | orchestrator |
2026-02-09 05:30:37.221245 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-09 05:30:37.221249 | orchestrator | Monday 09 February 2026 05:30:32 +0000 (0:00:02.162) 0:00:16.140 *******
2026-02-09 05:30:37.221259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-09 05:30:49.635943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-09 05:30:49.636065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-09 05:30:49.636119 | orchestrator |
2026-02-09 05:30:49.636141 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-09 05:30:49.636177 | orchestrator | Monday 09 February 2026 05:30:37 +0000 (0:00:04.573) 0:00:20.714 *******
2026-02-09 05:30:49.636196 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:30:49.636215 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:30:49.636233 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:30:49.636252 | orchestrator |
2026-02-09 05:30:49.636270 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-09 05:30:49.636312 | orchestrator | Monday 09 February 2026 05:30:39 +0000 (0:00:02.020) 0:00:22.735
******* 2026-02-09 05:30:49.636333 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:30:49.636345 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:30:49.636356 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:30:49.636366 | orchestrator | 2026-02-09 05:30:49.636377 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-09 05:30:49.636388 | orchestrator | Monday 09 February 2026 05:30:44 +0000 (0:00:04.836) 0:00:27.571 ******* 2026-02-09 05:30:49.636400 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:30:49.636411 | orchestrator | 2026-02-09 05:30:49.636423 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-09 05:30:49.636436 | orchestrator | Monday 09 February 2026 05:30:46 +0000 (0:00:02.045) 0:00:29.617 ******* 2026-02-09 05:30:49.636452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:30:49.636482 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:30:49.636511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:30:57.613103 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:30:57.613277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:30:57.613347 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:30:57.613370 | orchestrator | 2026-02-09 05:30:57.613388 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-09 05:30:57.613406 | orchestrator | Monday 09 February 2026 05:30:49 +0000 (0:00:03.513) 0:00:33.130 ******* 2026-02-09 05:30:57.613449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:30:57.613472 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:30:57.613517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:30:57.613552 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:30:57.613578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:30:57.613596 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:30:57.613612 | orchestrator | 2026-02-09 05:30:57.613630 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-09 05:30:57.613647 | orchestrator | Monday 09 February 2026 05:30:53 +0000 (0:00:03.450) 0:00:36.581 ******* 2026-02-09 05:30:57.613679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:02.045618 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:31:02.045778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:02.045801 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:31:02.045837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:02.045875 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:31:02.045921 | orchestrator | 2026-02-09 05:31:02.045935 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-02-09 05:31:02.045948 | orchestrator | Monday 09 February 2026 05:30:57 +0000 (0:00:04.522) 0:00:41.104 ******* 2026-02-09 05:31:02.045985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 05:31:02.046006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 05:31:02.046131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-09 05:31:17.479960 | orchestrator | 2026-02-09 05:31:17.480072 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-09 05:31:17.480088 | orchestrator | Monday 09 February 2026 05:31:02 +0000 (0:00:04.435) 0:00:45.539 ******* 2026-02-09 05:31:17.480100 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:31:17.480113 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:31:17.480124 | orchestrator | } 2026-02-09 05:31:17.480136 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:31:17.480147 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:31:17.480158 | orchestrator | } 2026-02-09 05:31:17.480169 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 05:31:17.480179 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:31:17.480190 | orchestrator | } 2026-02-09 05:31:17.480201 | orchestrator | 2026-02-09 05:31:17.480212 | orchestrator | TASK [service-check-containers : Include 
tasks] ******************************** 2026-02-09 05:31:17.480223 | orchestrator | Monday 09 February 2026 05:31:03 +0000 (0:00:01.478) 0:00:47.018 ******* 2026-02-09 05:31:17.480257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:17.480297 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:31:17.480330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 
05:31:17.480344 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:31:17.480361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:17.480382 | orchestrator | skipping: 
[testbed-node-2]
2026-02-09 05:31:17.480393 | orchestrator |
2026-02-09 05:31:17.480404 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-02-09 05:31:17.480415 | orchestrator | Monday 09 February 2026 05:31:07 +0000 (0:00:04.198) 0:00:51.216 *******
2026-02-09 05:31:17.480426 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:17.480437 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:17.480448 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:17.480460 | orchestrator |
2026-02-09 05:31:17.480473 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-02-09 05:31:17.480485 | orchestrator | Monday 09 February 2026 05:31:09 +0000 (0:00:01.383) 0:00:52.600 *******
2026-02-09 05:31:17.480498 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:17.480511 | orchestrator |
2026-02-09 05:31:17.480524 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-02-09 05:31:17.480537 | orchestrator | Monday 09 February 2026 05:31:10 +0000 (0:00:01.129) 0:00:53.729 *******
2026-02-09 05:31:17.480549 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:17.480562 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:17.480575 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:17.480587 | orchestrator |
2026-02-09 05:31:17.480600 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-02-09 05:31:17.480612 | orchestrator | Monday 09 February 2026 05:31:11 +0000 (0:00:01.421) 0:00:55.151 *******
2026-02-09 05:31:17.480625 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:17.480638 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:17.480651 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:17.480663 | orchestrator |
2026-02-09 05:31:17.480676 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-02-09 05:31:17.480688 | orchestrator | Monday 09 February 2026 05:31:13 +0000 (0:00:01.671) 0:00:56.823 *******
2026-02-09 05:31:17.480701 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:17.480714 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:17.480725 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:17.480736 | orchestrator |
2026-02-09 05:31:17.480746 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-02-09 05:31:17.480757 | orchestrator | Monday 09 February 2026 05:31:14 +0000 (0:00:01.428) 0:00:58.252 *******
2026-02-09 05:31:17.480768 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:17.480779 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:17.480789 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:17.480800 | orchestrator |
2026-02-09 05:31:17.480810 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-02-09 05:31:17.480821 | orchestrator | Monday 09 February 2026 05:31:16 +0000 (0:00:01.401) 0:00:59.653 *******
2026-02-09 05:31:17.480832 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:17.480842 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:17.480853 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:17.480863 | orchestrator |
2026-02-09 05:31:17.481040 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] ****************************
2026-02-09 05:31:35.670928 | orchestrator | Monday 09 February 2026 05:31:17 +0000 (0:00:01.317) 0:01:00.970 *******
2026-02-09 05:31:35.671028 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:35.671057 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:35.671063 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:35.671068 | orchestrator |
2026-02-09 05:31:35.671074 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ********************
2026-02-09 05:31:35.671080 | orchestrator | Monday 09 February 2026 05:31:19 +0000 (0:00:01.607) 0:01:02.578 *******
2026-02-09 05:31:35.671085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:31:35.671091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 05:31:35.671096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 05:31:35.671101 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:35.671106 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-09 05:31:35.671111 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-09 05:31:35.671127 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-09 05:31:35.671132 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:35.671137 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-09 05:31:35.671142 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-09 05:31:35.671147 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-09 05:31:35.671152 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:35.671157 | orchestrator |
2026-02-09 05:31:35.671163 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] ***
2026-02-09 05:31:35.671168 | orchestrator | Monday 09 February 2026 05:31:20 +0000 (0:00:01.434) 0:01:04.013 *******
2026-02-09 05:31:35.671173 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:35.671178 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:35.671183 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:35.671188 | orchestrator |
2026-02-09 05:31:35.671193 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] *****
2026-02-09 05:31:35.671198 | orchestrator | Monday 09 February 2026 05:31:21 +0000 (0:00:01.438) 0:01:05.451 *******
2026-02-09 05:31:35.671206 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:35.671214 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:35.671223 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:35.671231 | orchestrator |
2026-02-09 05:31:35.671240 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] ***************
2026-02-09 05:31:35.671247 | orchestrator | Monday 09 February 2026 05:31:23 +0000 (0:00:01.466) 0:01:06.918 *******
2026-02-09 05:31:35.671254 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:35.671262 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:35.671271 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:35.671280 | orchestrator |
2026-02-09 05:31:35.671289 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] ***
2026-02-09 05:31:35.671297 | orchestrator | Monday 09 February 2026 05:31:24 +0000 (0:00:01.372) 0:01:08.291 *******
2026-02-09 05:31:35.671302 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:35.671307 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:35.671312 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:35.671317 | orchestrator |
2026-02-09 05:31:35.671322 | orchestrator | TASK [mariadb : Starting first MariaDB container] ******************************
2026-02-09 05:31:35.671327 | orchestrator | Monday 09 February 2026 05:31:26 +0000 (0:00:01.458) 0:01:09.749 *******
2026-02-09 05:31:35.671332 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:35.671337 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:35.671342 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:35.671347 | orchestrator |
2026-02-09 05:31:35.671352 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ******************************
2026-02-09 05:31:35.671358 | orchestrator | Monday 09 February 2026 05:31:27 +0000
(0:00:01.342) 0:01:11.092 ******* 2026-02-09 05:31:35.671363 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:31:35.671368 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:31:35.671373 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:31:35.671384 | orchestrator | 2026-02-09 05:31:35.671389 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-09 05:31:35.671394 | orchestrator | Monday 09 February 2026 05:31:29 +0000 (0:00:01.750) 0:01:12.842 ******* 2026-02-09 05:31:35.671399 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:31:35.671404 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:31:35.671409 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:31:35.671414 | orchestrator | 2026-02-09 05:31:35.671419 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-09 05:31:35.671424 | orchestrator | Monday 09 February 2026 05:31:30 +0000 (0:00:01.392) 0:01:14.234 ******* 2026-02-09 05:31:35.671430 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:31:35.671435 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:31:35.671440 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:31:35.671445 | orchestrator | 2026-02-09 05:31:35.671450 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-09 05:31:35.671455 | orchestrator | Monday 09 February 2026 05:31:32 +0000 (0:00:01.409) 0:01:15.644 ******* 2026-02-09 05:31:35.671486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:35.671496 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:31:35.671503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:35.671514 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:31:35.671530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:52.771371 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:31:52.771556 | orchestrator | 2026-02-09 05:31:52.771586 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-09 05:31:52.771608 | orchestrator | Monday 09 February 2026 05:31:35 +0000 (0:00:03.519) 0:01:19.163 ******* 2026-02-09 05:31:52.771626 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:31:52.771644 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:31:52.771662 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:31:52.771679 | orchestrator | 2026-02-09 05:31:52.771698 | orchestrator | TASK [mariadb : Restart 
master MariaDB container(s)] *************************** 2026-02-09 05:31:52.771717 | orchestrator | Monday 09 February 2026 05:31:37 +0000 (0:00:01.661) 0:01:20.825 ******* 2026-02-09 05:31:52.771743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:52.771809 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:31:52.771907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 
 2026-02-09 05:31:52.771934 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:31:52.771956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-09 05:31:52.771990 | orchestrator | 
skipping: [testbed-node-2]
2026-02-09 05:31:52.772010 | orchestrator |
2026-02-09 05:31:52.772030 | orchestrator | TASK [mariadb : Wait for master mariadb] ***************************************
2026-02-09 05:31:52.772050 | orchestrator | Monday 09 February 2026 05:31:40 +0000 (0:00:03.450) 0:01:24.275 *******
2026-02-09 05:31:52.772069 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:52.772088 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:52.772107 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:52.772127 | orchestrator |
2026-02-09 05:31:52.772146 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-02-09 05:31:52.772166 | orchestrator | Monday 09 February 2026 05:31:42 +0000 (0:00:01.720) 0:01:25.996 *******
2026-02-09 05:31:52.772186 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:52.772206 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:52.772224 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:52.772242 | orchestrator |
2026-02-09 05:31:52.772261 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-02-09 05:31:52.772280 | orchestrator | Monday 09 February 2026 05:31:43 +0000 (0:00:01.471) 0:01:27.467 *******
2026-02-09 05:31:52.772299 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:52.772317 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:52.772336 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:52.772354 | orchestrator |
2026-02-09 05:31:52.772373 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-02-09 05:31:52.772391 | orchestrator | Monday 09 February 2026 05:31:45 +0000 (0:00:01.466) 0:01:28.934 *******
2026-02-09 05:31:52.772410 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:52.772428 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:52.772446 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:52.772464 | orchestrator |
2026-02-09 05:31:52.772482 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-09 05:31:52.772501 | orchestrator | Monday 09 February 2026 05:31:47 +0000 (0:00:01.770) 0:01:30.705 *******
2026-02-09 05:31:52.772519 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:31:52.772537 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:31:52.772555 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:31:52.772573 | orchestrator |
2026-02-09 05:31:52.772591 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-09 05:31:52.772610 | orchestrator | Monday 09 February 2026 05:31:49 +0000 (0:00:01.974) 0:01:32.679 *******
2026-02-09 05:31:52.772628 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:31:52.772647 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:31:52.772666 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:31:52.772684 | orchestrator |
2026-02-09 05:31:52.772703 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-09 05:31:52.772722 | orchestrator | Monday 09 February 2026 05:31:51 +0000 (0:00:01.979) 0:01:34.658 *******
2026-02-09 05:31:52.772751 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:31:52.772770 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:31:52.772788 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:31:52.772806 | orchestrator |
2026-02-09 05:31:52.772830 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-09 05:31:52.772849 | orchestrator | Monday 09 February 2026 05:31:52 +0000 (0:00:01.386) 0:01:36.044 *******
2026-02-09 05:31:52.772923 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:34:32.527760 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:34:32.527886 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:34:32.527970 | orchestrator |
2026-02-09 05:34:32.527984 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-09 05:34:32.527997 | orchestrator | Monday 09 February 2026 05:31:53 +0000 (0:00:01.395) 0:01:37.440 *******
2026-02-09 05:34:32.528009 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:34:32.528020 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:34:32.528031 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:34:32.528043 | orchestrator |
2026-02-09 05:34:32.528055 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-09 05:34:32.528067 | orchestrator | Monday 09 February 2026 05:31:56 +0000 (0:00:02.174) 0:01:39.614 *******
2026-02-09 05:34:32.528080 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:34:32.528092 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:34:32.528104 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:34:32.528115 | orchestrator |
2026-02-09 05:34:32.528127 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-09 05:34:32.528138 | orchestrator | Monday 09 February 2026 05:31:57 +0000 (0:00:01.354) 0:01:40.969 *******
2026-02-09 05:34:32.528151 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:34:32.528164 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:34:32.528174 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:34:32.528186 | orchestrator |
2026-02-09 05:34:32.528197 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-09 05:34:32.528208 | orchestrator | Monday 09 February 2026 05:31:58 +0000 (0:00:01.466) 0:01:42.435 *******
2026-02-09 05:34:32.528219 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:34:32.528230 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:34:32.528241 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:34:32.528251 | orchestrator |
2026-02-09 05:34:32.528262 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-09 05:34:32.528274 | orchestrator | Monday 09 February 2026 05:32:02 +0000 (0:00:03.634) 0:01:46.070 *******
2026-02-09 05:34:32.528285 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:34:32.528297 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:34:32.528309 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:34:32.528320 | orchestrator |
2026-02-09 05:34:32.528332 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-09 05:34:32.528344 | orchestrator | Monday 09 February 2026 05:32:04 +0000 (0:00:01.500) 0:01:47.570 *******
2026-02-09 05:34:32.528355 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:34:32.528367 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:34:32.528378 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:34:32.528390 | orchestrator |
2026-02-09 05:34:32.528402 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-09 05:34:32.528415 | orchestrator | Monday 09 February 2026 05:32:05 +0000 (0:00:01.427) 0:01:48.997 *******
2026-02-09 05:34:32.528427 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:34:32.528439 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:34:32.528450 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:34:32.528462 | orchestrator |
2026-02-09 05:34:32.528473 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-09 05:34:32.528485 | orchestrator | Monday 09 February 2026 05:32:07 +0000 (0:00:01.778) 0:01:50.775 *******
2026-02-09 05:34:32.528497 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:34:32.528537 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:34:32.528549 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:34:32.528585 | orchestrator |
2026-02-09 05:34:32.528610 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-09 05:34:32.528635 | orchestrator | Monday 09 February 2026 05:32:08 +0000 (0:00:01.585) 0:01:52.361 *******
2026-02-09 05:34:32.528646 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:34:32.528658 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:34:32.528669 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:34:32.528680 | orchestrator |
2026-02-09 05:34:32.528691 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-09 05:34:32.528703 | orchestrator | Monday 09 February 2026 05:32:10 +0000 (0:00:01.592) 0:01:53.953 *******
2026-02-09 05:34:32.528715 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:34:32.528727 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:34:32.528738 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:34:32.528749 | orchestrator |
2026-02-09 05:34:32.528759 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-09 05:34:32.528769 | orchestrator | Monday 09 February 2026 05:32:12 +0000 (0:00:01.718) 0:01:55.671 *******
2026-02-09 05:34:32.528779 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:34:32.528790 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:34:32.528800 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:34:32.528811 | orchestrator |
2026-02-09 05:34:32.528822 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-09 05:34:32.528833 | orchestrator |
2026-02-09 05:34:32.528842 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-09 05:34:32.528852 | orchestrator | Monday 09 February 2026 05:32:13 +0000 (0:00:01.752) 0:01:57.424 *******
2026-02-09 05:34:32.528862 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:34:32.528873 | orchestrator |
2026-02-09 05:34:32.528882 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-09 05:34:32.528916 | orchestrator | Monday 09 February 2026 05:32:40 +0000 (0:00:26.101) 0:02:23.525 *******
2026-02-09 05:34:32.528928 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:34:32.528938 | orchestrator |
2026-02-09 05:34:32.528948 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-09 05:34:32.528959 | orchestrator | Monday 09 February 2026 05:32:45 +0000 (0:00:05.738) 0:02:29.264 *******
2026-02-09 05:34:32.528971 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:34:32.528981 | orchestrator |
2026-02-09 05:34:32.528993 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-09 05:34:32.529005 | orchestrator |
2026-02-09 05:34:32.529016 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-09 05:34:32.529026 | orchestrator | Monday 09 February 2026 05:32:48 +0000 (0:00:03.094) 0:02:32.358 *******
2026-02-09 05:34:32.529053 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:34:32.529065 | orchestrator |
2026-02-09 05:34:32.529076 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-09 05:34:32.529108 | orchestrator | Monday 09 February 2026 05:33:15 +0000 (0:00:26.278) 0:02:58.636 *******
2026-02-09 05:34:32.529121 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:34:32.529133 | orchestrator |
2026-02-09 05:34:32.529143 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-09 05:34:32.529155 | orchestrator | Monday 09 February 2026 05:33:20 +0000 (0:00:05.632) 0:03:04.269 *******
2026-02-09 05:34:32.529166 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:34:32.529177 | orchestrator |
2026-02-09 05:34:32.529187 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-09 05:34:32.529198 | orchestrator |
2026-02-09 05:34:32.529210 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-09 05:34:32.529220 | orchestrator | Monday 09 February 2026 05:33:24 +0000 (0:00:03.562) 0:03:07.831 *******
2026-02-09 05:34:32.529231 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:34:32.529257 | orchestrator |
2026-02-09 05:34:32.529269 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-09 05:34:32.529280 | orchestrator | Monday 09 February 2026 05:33:49 +0000 (0:00:25.602) 0:03:33.434 *******
2026-02-09 05:34:32.529291 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
2026-02-09 05:34:32.529303 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:34:32.529314 | orchestrator |
2026-02-09 05:34:32.529325 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-09 05:34:32.529336 | orchestrator | Monday 09 February 2026 05:33:57 +0000 (0:00:08.043) 0:03:41.478 *******
2026-02-09 05:34:32.529347 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-09 05:34:32.529358 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-09 05:34:32.529370 | orchestrator | mariadb_bootstrap_restart
2026-02-09 05:34:32.529378 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:34:32.529384 | orchestrator |
2026-02-09 05:34:32.529391 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-09 05:34:32.529402 | orchestrator | skipping: no hosts matched
2026-02-09 05:34:32.529413 | orchestrator |
2026-02-09 05:34:32.529424 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-09 05:34:32.529435 | orchestrator | skipping: no hosts matched
2026-02-09 05:34:32.529446 | orchestrator |
2026-02-09 05:34:32.529457 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-09 05:34:32.529467 | orchestrator |
2026-02-09 05:34:32.529478 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-09 05:34:32.529490 | orchestrator | Monday 09 February 2026 05:34:01 +0000 (0:00:04.024) 0:03:45.503 *******
2026-02-09 05:34:32.529501 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-09 05:34:32.529512 | orchestrator |
2026-02-09 05:34:32.529523 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-09 05:34:32.529533 | orchestrator | Monday 09 February 2026 05:34:04 +0000 (0:00:02.031) 0:03:47.534 *******
2026-02-09 05:34:32.529545 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:34:32.529556 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:34:32.529567 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:34:32.529578 | orchestrator |
2026-02-09 05:34:32.529589 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-09 05:34:32.529600 | orchestrator | Monday 09 February 2026 05:34:07 +0000 (0:00:03.099) 0:03:50.634 *******
2026-02-09 05:34:32.529611 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:34:32.529622 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:34:32.529633 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:34:32.529644 | orchestrator |
2026-02-09 05:34:32.529656 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-09 05:34:32.529668 | orchestrator | Monday 09 February 2026 05:34:10 +0000 (0:00:03.219) 0:03:53.853 *******
2026-02-09 05:34:32.529678 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:34:32.529690 | orchestrator | skipping:
[testbed-node-2] 2026-02-09 05:34:32.529700 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:34:32.529711 | orchestrator | 2026-02-09 05:34:32.529722 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-09 05:34:32.529731 | orchestrator | Monday 09 February 2026 05:34:13 +0000 (0:00:03.112) 0:03:56.966 ******* 2026-02-09 05:34:32.529741 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:34:32.529750 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:34:32.529759 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:34:32.529769 | orchestrator | 2026-02-09 05:34:32.529778 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-09 05:34:32.529788 | orchestrator | Monday 09 February 2026 05:34:16 +0000 (0:00:03.505) 0:04:00.471 ******* 2026-02-09 05:34:32.529798 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:34:32.529807 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:34:32.529825 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:34:32.529835 | orchestrator | 2026-02-09 05:34:32.529845 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-09 05:34:32.529856 | orchestrator | Monday 09 February 2026 05:34:23 +0000 (0:00:06.796) 0:04:07.268 ******* 2026-02-09 05:34:32.529867 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:34:32.529877 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:34:32.529909 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:34:32.529920 | orchestrator | 2026-02-09 05:34:32.529930 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-09 05:34:32.529940 | orchestrator | Monday 09 February 2026 05:34:27 +0000 (0:00:03.738) 0:04:11.006 ******* 2026-02-09 05:34:32.529951 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:34:32.529961 | orchestrator | skipping: [testbed-node-1] 
2026-02-09 05:34:32.529972 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:34:32.529982 | orchestrator | 2026-02-09 05:34:32.529992 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-09 05:34:32.530002 | orchestrator | Monday 09 February 2026 05:34:29 +0000 (0:00:01.588) 0:04:12.595 ******* 2026-02-09 05:34:32.530012 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:34:32.530084 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:34:32.530103 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:34:32.530114 | orchestrator | 2026-02-09 05:34:32.530125 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-09 05:34:32.530147 | orchestrator | Monday 09 February 2026 05:34:32 +0000 (0:00:03.421) 0:04:16.016 ******* 2026-02-09 05:34:53.114481 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:34:53.114617 | orchestrator | 2026-02-09 05:34:53.114642 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-09 05:34:53.114658 | orchestrator | Monday 09 February 2026 05:34:34 +0000 (0:00:02.062) 0:04:18.079 ******* 2026-02-09 05:34:53.114674 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:34:53.114685 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:34:53.114694 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:34:53.114703 | orchestrator | 2026-02-09 05:34:53.114712 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 05:34:53.114722 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-09 05:34:53.114733 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-09 05:34:53.114742 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  
rescued=0 ignored=0 2026-02-09 05:34:53.114750 | orchestrator | 2026-02-09 05:34:53.114759 | orchestrator | 2026-02-09 05:34:53.114767 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 05:34:53.114776 | orchestrator | Monday 09 February 2026 05:34:52 +0000 (0:00:18.006) 0:04:36.085 ******* 2026-02-09 05:34:53.114785 | orchestrator | =============================================================================== 2026-02-09 05:34:53.114793 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 77.98s 2026-02-09 05:34:53.114802 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 19.41s 2026-02-09 05:34:53.114811 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 18.01s 2026-02-09 05:34:53.114819 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.68s 2026-02-09 05:34:53.114828 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.80s 2026-02-09 05:34:53.114836 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.84s 2026-02-09 05:34:53.114845 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.57s 2026-02-09 05:34:53.114854 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.52s 2026-02-09 05:34:53.114932 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.44s 2026-02-09 05:34:53.115024 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.20s 2026-02-09 05:34:53.115089 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.94s 2026-02-09 05:34:53.115100 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.74s 2026-02-09 05:34:53.115110 | 
orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.63s 2026-02-09 05:34:53.115121 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.52s 2026-02-09 05:34:53.115131 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.51s 2026-02-09 05:34:53.115141 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.51s 2026-02-09 05:34:53.115152 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.45s 2026-02-09 05:34:53.115162 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.45s 2026-02-09 05:34:53.115173 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.42s 2026-02-09 05:34:53.115184 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.22s 2026-02-09 05:34:53.452202 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-09 05:34:55.594527 | orchestrator | 2026-02-09 05:34:55 | INFO  | Task 93c3fdad-978f-4e06-9048-df10c015061e (rabbitmq) was prepared for execution. 2026-02-09 05:34:55.594645 | orchestrator | 2026-02-09 05:34:55 | INFO  | It takes a moment until task 93c3fdad-978f-4e06-9048-df10c015061e (rabbitmq) has been started and output is visible here. 
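The mariadb run above ends with a PLAY RECAP whose `failed=0 unreachable=0` counters are what make the upgrade count as successful before the job moves on to `osism apply -a upgrade rabbitmq`. As a minimal sketch (a hypothetical helper, not part of the job itself), recap lines in this format can be parsed and gated like this:

```python
import re

# Matches Ansible PLAY RECAP lines such as:
#   testbed-node-0 : ok=34 changed=8 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(line: str) -> dict:
    """Extract host name and task counters from one recap line."""
    m = RECAP_RE.search(line)
    if not m:
        raise ValueError("not a PLAY RECAP line: %r" % line)
    d = m.groupdict()
    return {k: (v if k == "host" else int(v)) for k, v in d.items()}

def host_ok(line: str) -> bool:
    """A host passed if nothing failed and it stayed reachable."""
    recap = parse_recap(line)
    return recap["failed"] == 0 and recap["unreachable"] == 0

line = "testbed-node-0 : ok=34 changed=8 unreachable=0 failed=0 skipped=36 rescued=0 ignored=0"
assert host_ok(line)
```

The regex only keys on the counters shown in this log; other Ansible versions may add fields, which `search` tolerates by ignoring trailing text.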
2026-02-09 05:35:24.827210 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-09 05:35:24.827328 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-09 05:35:24.827359 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-09 05:35:24.827370 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-09 05:35:24.827392 | orchestrator | 2026-02-09 05:35:24.827404 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 05:35:24.827415 | orchestrator | 2026-02-09 05:35:24.827425 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 05:35:24.827436 | orchestrator | Monday 09 February 2026 05:35:00 +0000 (0:00:00.991) 0:00:00.991 ******* 2026-02-09 05:35:24.827447 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:35:24.827459 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:35:24.827485 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:35:24.827496 | orchestrator | 2026-02-09 05:35:24.827507 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 05:35:24.827518 | orchestrator | Monday 09 February 2026 05:35:01 +0000 (0:00:00.981) 0:00:01.973 ******* 2026-02-09 05:35:24.827529 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-09 05:35:24.827541 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-09 05:35:24.827551 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-09 05:35:24.827562 | orchestrator | 2026-02-09 05:35:24.827573 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-09 05:35:24.827584 | orchestrator | 2026-02-09 05:35:24.827594 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-09 
05:35:24.827605 | orchestrator | Monday 09 February 2026 05:35:02 +0000 (0:00:01.154) 0:00:03.127 ******* 2026-02-09 05:35:24.827616 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:35:24.827650 | orchestrator | 2026-02-09 05:35:24.827662 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-09 05:35:24.827673 | orchestrator | Monday 09 February 2026 05:35:04 +0000 (0:00:01.253) 0:00:04.380 ******* 2026-02-09 05:35:24.827683 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:35:24.827694 | orchestrator | 2026-02-09 05:35:24.827705 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-09 05:35:24.827715 | orchestrator | Monday 09 February 2026 05:35:05 +0000 (0:00:01.299) 0:00:05.680 ******* 2026-02-09 05:35:24.827726 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:35:24.827737 | orchestrator | 2026-02-09 05:35:24.827748 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-09 05:35:24.827762 | orchestrator | Monday 09 February 2026 05:35:07 +0000 (0:00:02.301) 0:00:07.981 ******* 2026-02-09 05:35:24.827774 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:35:24.827787 | orchestrator | 2026-02-09 05:35:24.827801 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-09 05:35:24.827813 | orchestrator | Monday 09 February 2026 05:35:16 +0000 (0:00:08.491) 0:00:16.473 ******* 2026-02-09 05:35:24.827825 | orchestrator | ok: [testbed-node-0] => { 2026-02-09 05:35:24.827838 | orchestrator |  "changed": false, 2026-02-09 05:35:24.827851 | orchestrator |  "msg": "All assertions passed" 2026-02-09 05:35:24.827865 | orchestrator | } 2026-02-09 05:35:24.827878 | orchestrator | 2026-02-09 05:35:24.827890 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] 
********************** 2026-02-09 05:35:24.827931 | orchestrator | Monday 09 February 2026 05:35:16 +0000 (0:00:00.338) 0:00:16.812 ******* 2026-02-09 05:35:24.827945 | orchestrator | ok: [testbed-node-0] => { 2026-02-09 05:35:24.827957 | orchestrator |  "changed": false, 2026-02-09 05:35:24.827969 | orchestrator |  "msg": "All assertions passed" 2026-02-09 05:35:24.827982 | orchestrator | } 2026-02-09 05:35:24.827995 | orchestrator | 2026-02-09 05:35:24.828007 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-09 05:35:24.828018 | orchestrator | Monday 09 February 2026 05:35:17 +0000 (0:00:00.691) 0:00:17.503 ******* 2026-02-09 05:35:24.828029 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:35:24.828040 | orchestrator | 2026-02-09 05:35:24.828050 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-09 05:35:24.828061 | orchestrator | Monday 09 February 2026 05:35:18 +0000 (0:00:00.996) 0:00:18.500 ******* 2026-02-09 05:35:24.828072 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:35:24.828082 | orchestrator | 2026-02-09 05:35:24.828093 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-09 05:35:24.828103 | orchestrator | Monday 09 February 2026 05:35:19 +0000 (0:00:01.174) 0:00:19.674 ******* 2026-02-09 05:35:24.828114 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:35:24.828125 | orchestrator | 2026-02-09 05:35:24.828135 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-09 05:35:24.828146 | orchestrator | Monday 09 February 2026 05:35:21 +0000 (0:00:02.006) 0:00:21.681 ******* 2026-02-09 05:35:24.828157 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:35:24.828167 | orchestrator | 2026-02-09 05:35:24.828178 | orchestrator | TASK 
[rabbitmq : Ensuring config directories exist] **************************** 2026-02-09 05:35:24.828189 | orchestrator | Monday 09 February 2026 05:35:22 +0000 (0:00:01.145) 0:00:22.826 ******* 2026-02-09 05:35:24.828226 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:35:24.828260 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:35:24.828274 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:35:24.828286 | orchestrator | 2026-02-09 05:35:24.828297 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-09 05:35:24.828308 | orchestrator | Monday 09 February 2026 05:35:23 +0000 (0:00:00.772) 0:00:23.599 ******* 2026-02-09 05:35:24.828327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:35:36.737561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:35:36.737690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:35:36.737711 | orchestrator | 2026-02-09 05:35:36.737726 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-09 05:35:36.737739 | orchestrator | Monday 09 February 2026 05:35:24 +0000 (0:00:01.408) 0:00:25.007 ******* 2026-02-09 05:35:36.737750 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-09 05:35:36.737762 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-09 05:35:36.737773 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-09 05:35:36.737784 | orchestrator | 2026-02-09 05:35:36.737795 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2026-02-09 05:35:36.737806 | orchestrator | Monday 09 February 2026 05:35:26 +0000 (0:00:01.476) 0:00:26.484 ******* 2026-02-09 05:35:36.737817 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-09 05:35:36.737827 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-09 05:35:36.737838 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-09 05:35:36.737848 | orchestrator | 2026-02-09 05:35:36.737859 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-09 05:35:36.737870 | orchestrator | Monday 09 February 2026 05:35:28 +0000 (0:00:02.181) 0:00:28.665 ******* 2026-02-09 05:35:36.737880 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-09 05:35:36.737891 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-09 05:35:36.737936 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-09 05:35:36.737974 | orchestrator | 2026-02-09 05:35:36.737985 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-09 05:35:36.737996 | orchestrator | Monday 09 February 2026 05:35:29 +0000 (0:00:01.358) 0:00:30.024 ******* 2026-02-09 05:35:36.738006 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-09 05:35:36.738073 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-09 05:35:36.738087 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-09 05:35:36.738100 | orchestrator | 2026-02-09 05:35:36.738123 | orchestrator | TASK [rabbitmq : Copying over definitions.json] 
******************************** 2026-02-09 05:35:36.738155 | orchestrator | Monday 09 February 2026 05:35:31 +0000 (0:00:01.427) 0:00:31.452 ******* 2026-02-09 05:35:36.738173 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-09 05:35:36.738193 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-09 05:35:36.738214 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-09 05:35:36.738234 | orchestrator | 2026-02-09 05:35:36.738251 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-09 05:35:36.738265 | orchestrator | Monday 09 February 2026 05:35:32 +0000 (0:00:01.365) 0:00:32.817 ******* 2026-02-09 05:35:36.738278 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-09 05:35:36.738290 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-09 05:35:36.738303 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-09 05:35:36.738316 | orchestrator | 2026-02-09 05:35:36.738336 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-09 05:35:36.738347 | orchestrator | Monday 09 February 2026 05:35:34 +0000 (0:00:01.594) 0:00:34.412 ******* 2026-02-09 05:35:36.738358 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:35:36.738369 | orchestrator | 2026-02-09 05:35:36.738380 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-09 05:35:36.738390 | orchestrator | Monday 09 February 2026 05:35:35 +0000 (0:00:00.999) 0:00:35.412 ******* 2026-02-09 05:35:36.738403 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:35:36.738416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:35:36.738450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:35:42.546726 | orchestrator | 2026-02-09 05:35:42.546851 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-09 05:35:42.546875 | orchestrator | Monday 09 February 2026 05:35:36 +0000 (0:00:01.496) 0:00:36.908 ******* 2026-02-09 05:35:42.547003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:35:42.547031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:35:42.547077 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:35:42.547095 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:35:42.547113 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:35:42.547131 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:35:42.547148 | orchestrator | 2026-02-09 05:35:42.547165 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-09 05:35:42.547182 | orchestrator | Monday 09 February 2026 05:35:37 +0000 (0:00:00.442) 0:00:37.351 ******* 2026-02-09 05:35:42.547230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:35:42.547249 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:35:42.547267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:35:42.547285 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:35:42.547302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:35:42.547330 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:35:42.547347 | orchestrator | 2026-02-09 05:35:42.547364 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-09 05:35:42.547380 | orchestrator | Monday 09 February 2026 05:35:38 +0000 (0:00:01.072) 0:00:38.424 ******* 2026-02-09 05:35:42.547397 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:35:42.547415 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:35:42.547432 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:35:42.547448 | orchestrator | 2026-02-09 05:35:42.547464 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-09 05:35:42.547481 | orchestrator | Monday 09 February 2026 05:35:41 +0000 (0:00:03.130) 0:00:41.554 ******* 2026-02-09 05:35:42.547509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:36:35.851512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:36:35.851619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-09 05:36:35.851653 | orchestrator | 2026-02-09 05:36:35.851665 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-09 05:36:35.851675 | orchestrator | Monday 09 February 2026 05:35:42 +0000 (0:00:01.179) 0:00:42.733 ******* 2026-02-09 05:36:35.851685 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:36:35.851694 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:36:35.851703 | orchestrator | } 2026-02-09 05:36:35.851712 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:36:35.851721 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:36:35.851729 | orchestrator | } 2026-02-09 05:36:35.851738 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 05:36:35.851746 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 
05:36:35.851755 | orchestrator | } 2026-02-09 05:36:35.851763 | orchestrator | 2026-02-09 05:36:35.851772 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-09 05:36:35.851782 | orchestrator | Monday 09 February 2026 05:35:42 +0000 (0:00:00.361) 0:00:43.095 ******* 2026-02-09 05:36:35.851791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:36:35.851801 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:36:35.851810 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-09 05:36:35.851819 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-09 05:36:35.851858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:36:35.851875 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:36:35.851885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}})  2026-02-09 05:36:35.851894 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:36:35.851953 | orchestrator | 2026-02-09 05:36:35.851962 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-09 05:36:35.851971 | orchestrator | Monday 09 February 2026 05:35:44 +0000 (0:00:01.388) 0:00:44.484 ******* 2026-02-09 05:36:35.851980 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:36:35.851989 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:36:35.851997 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:36:35.852006 | orchestrator | 2026-02-09 05:36:35.852014 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-09 05:36:35.852023 | orchestrator | 2026-02-09 05:36:35.852032 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-09 05:36:35.852041 | orchestrator | Monday 09 February 2026 05:35:45 +0000 (0:00:01.095) 0:00:45.579 ******* 2026-02-09 05:36:35.852050 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:36:35.852059 | orchestrator | 2026-02-09 05:36:35.852067 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-09 05:36:35.852076 | orchestrator | Monday 09 February 2026 05:35:46 +0000 (0:00:01.058) 0:00:46.638 ******* 2026-02-09 05:36:35.852085 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:36:35.852093 | orchestrator | 2026-02-09 05:36:35.852102 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-09 05:36:35.852110 | orchestrator | Monday 09 February 2026 05:35:54 +0000 (0:00:08.273) 0:00:54.911 ******* 2026-02-09 05:36:35.852119 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:36:35.852128 | orchestrator | 2026-02-09 05:36:35.852136 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
******************************** 2026-02-09 05:36:35.852145 | orchestrator | Monday 09 February 2026 05:36:02 +0000 (0:00:08.008) 0:01:02.919 ******* 2026-02-09 05:36:35.852154 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:36:35.852162 | orchestrator | 2026-02-09 05:36:35.852171 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-09 05:36:35.852180 | orchestrator | 2026-02-09 05:36:35.852188 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-09 05:36:35.852197 | orchestrator | Monday 09 February 2026 05:36:12 +0000 (0:00:09.851) 0:01:12.770 ******* 2026-02-09 05:36:35.852206 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:36:35.852220 | orchestrator | 2026-02-09 05:36:35.852229 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-09 05:36:35.852238 | orchestrator | Monday 09 February 2026 05:36:13 +0000 (0:00:01.112) 0:01:13.883 ******* 2026-02-09 05:36:35.852246 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:36:35.852255 | orchestrator | 2026-02-09 05:36:35.852263 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-09 05:36:35.852272 | orchestrator | Monday 09 February 2026 05:36:22 +0000 (0:00:08.654) 0:01:22.537 ******* 2026-02-09 05:36:35.852286 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:37:23.965669 | orchestrator | 2026-02-09 05:37:23.965771 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-09 05:37:23.965802 | orchestrator | Monday 09 February 2026 05:36:35 +0000 (0:00:13.491) 0:01:36.028 ******* 2026-02-09 05:37:23.965813 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:37:23.965823 | orchestrator | 2026-02-09 05:37:23.965832 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-09 
05:37:23.965841 | orchestrator | 2026-02-09 05:37:23.965850 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-09 05:37:23.965859 | orchestrator | Monday 09 February 2026 05:36:46 +0000 (0:00:10.844) 0:01:46.873 ******* 2026-02-09 05:37:23.965867 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:37:23.965877 | orchestrator | 2026-02-09 05:37:23.965886 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-09 05:37:23.965894 | orchestrator | Monday 09 February 2026 05:36:47 +0000 (0:00:01.300) 0:01:48.173 ******* 2026-02-09 05:37:23.965903 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:37:23.965957 | orchestrator | 2026-02-09 05:37:23.965966 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-09 05:37:23.965975 | orchestrator | Monday 09 February 2026 05:36:56 +0000 (0:00:08.545) 0:01:56.719 ******* 2026-02-09 05:37:23.965984 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:37:23.965992 | orchestrator | 2026-02-09 05:37:23.966000 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-09 05:37:23.966009 | orchestrator | Monday 09 February 2026 05:37:09 +0000 (0:00:13.000) 0:02:09.719 ******* 2026-02-09 05:37:23.966064 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:37:23.966073 | orchestrator | 2026-02-09 05:37:23.966082 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-09 05:37:23.966091 | orchestrator | 2026-02-09 05:37:23.966099 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-09 05:37:23.966108 | orchestrator | Monday 09 February 2026 05:37:19 +0000 (0:00:09.806) 0:02:19.526 ******* 2026-02-09 05:37:23.966117 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 
05:37:23.966125 | orchestrator | 2026-02-09 05:37:23.966134 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-09 05:37:23.966142 | orchestrator | Monday 09 February 2026 05:37:19 +0000 (0:00:00.578) 0:02:20.104 ******* 2026-02-09 05:37:23.966151 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:37:23.966160 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:37:23.966168 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:37:23.966177 | orchestrator | 2026-02-09 05:37:23.966186 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-09 05:37:23.966196 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-09 05:37:23.966206 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-09 05:37:23.966214 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-09 05:37:23.966223 | orchestrator | 2026-02-09 05:37:23.966232 | orchestrator | 2026-02-09 05:37:23.966263 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 05:37:23.966272 | orchestrator | Monday 09 February 2026 05:37:23 +0000 (0:00:03.606) 0:02:23.710 ******* 2026-02-09 05:37:23.966281 | orchestrator | =============================================================================== 2026-02-09 05:37:23.966289 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 34.50s 2026-02-09 05:37:23.966298 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 30.50s 2026-02-09 05:37:23.966306 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 25.47s 2026-02-09 05:37:23.966314 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 8.49s 
2026-02-09 05:37:23.966323 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.61s 2026-02-09 05:37:23.966331 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 3.47s 2026-02-09 05:37:23.966340 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.13s 2026-02-09 05:37:23.966348 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.30s 2026-02-09 05:37:23.966356 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.18s 2026-02-09 05:37:23.966365 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.01s 2026-02-09 05:37:23.966373 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.59s 2026-02-09 05:37:23.966382 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.50s 2026-02-09 05:37:23.966390 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.48s 2026-02-09 05:37:23.966398 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.43s 2026-02-09 05:37:23.966407 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.41s 2026-02-09 05:37:23.966416 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.39s 2026-02-09 05:37:23.966424 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.37s 2026-02-09 05:37:23.966432 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.36s 2026-02-09 05:37:23.966441 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.30s 2026-02-09 05:37:23.966449 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.25s 2026-02-09 
05:37:24.327878 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-09 05:37:26.537960 | orchestrator | 2026-02-09 05:37:26 | INFO  | Task 6772daa9-4137-4f27-a6a9-607382780bb4 (openvswitch) was prepared for execution. 2026-02-09 05:37:26.538121 | orchestrator | 2026-02-09 05:37:26 | INFO  | It takes a moment until task 6772daa9-4137-4f27-a6a9-607382780bb4 (openvswitch) has been started and output is visible here. 2026-02-09 05:37:44.835321 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-09 05:37:44.835434 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-09 05:37:44.835462 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-09 05:37:44.835474 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-09 05:37:44.835496 | orchestrator | 2026-02-09 05:37:44.835508 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-09 05:37:44.835519 | orchestrator | 2026-02-09 05:37:44.835530 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-09 05:37:44.835540 | orchestrator | Monday 09 February 2026 05:37:32 +0000 (0:00:01.488) 0:00:01.488 ******* 2026-02-09 05:37:44.835551 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:37:44.835563 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:37:44.835619 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:37:44.835642 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:37:44.835653 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:37:44.835664 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:37:44.835675 | orchestrator | 2026-02-09 05:37:44.835686 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-09 05:37:44.835697 | orchestrator | Monday 09 February 2026 05:37:33 +0000 (0:00:01.405) 0:00:02.894 ******* 2026-02-09 
05:37:44.835708 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 05:37:44.835718 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 05:37:44.835729 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 05:37:44.835740 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 05:37:44.835750 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 05:37:44.835761 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-09 05:37:44.835772 | orchestrator | 2026-02-09 05:37:44.835783 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-09 05:37:44.835793 | orchestrator | 2026-02-09 05:37:44.835804 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-09 05:37:44.835815 | orchestrator | Monday 09 February 2026 05:37:34 +0000 (0:00:01.090) 0:00:03.984 ******* 2026-02-09 05:37:44.835827 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 05:37:44.835839 | orchestrator | 2026-02-09 05:37:44.835850 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-09 05:37:44.835861 | orchestrator | Monday 09 February 2026 05:37:37 +0000 (0:00:02.634) 0:00:06.618 ******* 2026-02-09 05:37:44.835872 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-09 05:37:44.835883 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-09 05:37:44.835894 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-09 05:37:44.835904 | orchestrator | ok: [testbed-node-3] => 
(item=openvswitch) 2026-02-09 05:37:44.835944 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-09 05:37:44.835963 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-09 05:37:44.835982 | orchestrator | 2026-02-09 05:37:44.836000 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-09 05:37:44.836017 | orchestrator | Monday 09 February 2026 05:37:38 +0000 (0:00:01.262) 0:00:07.881 ******* 2026-02-09 05:37:44.836028 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-09 05:37:44.836039 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-09 05:37:44.836049 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-09 05:37:44.836060 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-09 05:37:44.836071 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-09 05:37:44.836081 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-09 05:37:44.836092 | orchestrator | 2026-02-09 05:37:44.836103 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-09 05:37:44.836245 | orchestrator | Monday 09 February 2026 05:37:40 +0000 (0:00:01.498) 0:00:09.380 ******* 2026-02-09 05:37:44.836279 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-09 05:37:44.836297 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:37:44.836314 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-09 05:37:44.836331 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:37:44.836349 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-09 05:37:44.836368 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:37:44.836380 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-09 05:37:44.836403 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:37:44.836414 | orchestrator | 
skipping: [testbed-node-4] => (item=openvswitch)  2026-02-09 05:37:44.836424 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:37:44.836435 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-09 05:37:44.836446 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:37:44.836456 | orchestrator | 2026-02-09 05:37:44.836467 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-09 05:37:44.836483 | orchestrator | Monday 09 February 2026 05:37:42 +0000 (0:00:01.875) 0:00:11.255 ******* 2026-02-09 05:37:44.836494 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:37:44.836505 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:37:44.836515 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:37:44.836526 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:37:44.836536 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:37:44.836569 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:37:44.836580 | orchestrator | 2026-02-09 05:37:44.836591 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-09 05:37:44.836602 | orchestrator | Monday 09 February 2026 05:37:43 +0000 (0:00:01.017) 0:00:12.272 ******* 2026-02-09 05:37:44.836622 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:44.836655 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:44.836680 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:44.836698 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:44.836728 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:44.836770 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112525 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112659 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112677 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112689 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112743 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112777 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112791 | orchestrator | 2026-02-09 05:37:47.112804 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-09 05:37:47.112817 | orchestrator | Monday 09 February 2026 05:37:44 +0000 (0:00:01.722) 0:00:13.994 ******* 2026-02-09 05:37:47.112828 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112843 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112864 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:47.112894 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:47.113017 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:47.113048 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661092 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl 
version'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661185 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661218 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661240 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661247 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661270 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661278 | orchestrator | 2026-02-09 05:37:50.661286 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-09 05:37:50.661295 | orchestrator | Monday 09 February 2026 05:37:47 +0000 
(0:00:02.404) 0:00:16.399 ******* 2026-02-09 05:37:50.661302 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:37:50.661309 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:37:50.661316 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:37:50.661322 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:37:50.661329 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:37:50.661335 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:37:50.661342 | orchestrator | 2026-02-09 05:37:50.661349 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-02-09 05:37:50.661356 | orchestrator | Monday 09 February 2026 05:37:48 +0000 (0:00:01.414) 0:00:17.813 ******* 2026-02-09 05:37:50.661363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:50.661411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 
'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:52.012607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-09 05:37:52.012730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:52.012748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:52.012774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:52.012785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:52.012813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:52.012832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-09 05:37:52.012843 | orchestrator | 2026-02-09 05:37:52.012855 | orchestrator | TASK 
[service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-02-09 05:37:52.012866 | orchestrator | Monday 09 February 2026 05:37:50 +0000 (0:00:02.130) 0:00:19.943 ******* 2026-02-09 05:37:52.012876 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:37:52.012887 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:37:52.012897 | orchestrator | } 2026-02-09 05:37:52.012907 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:37:52.012916 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:37:52.013001 | orchestrator | } 2026-02-09 05:37:52.013012 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 05:37:52.013021 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:37:52.013031 | orchestrator | } 2026-02-09 05:37:52.013041 | orchestrator | changed: [testbed-node-3] => { 2026-02-09 05:37:52.013050 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:37:52.013060 | orchestrator | } 2026-02-09 05:37:52.013069 | orchestrator | changed: [testbed-node-4] => { 2026-02-09 05:37:52.013079 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:37:52.013088 | orchestrator | } 2026-02-09 05:37:52.013098 | orchestrator | changed: [testbed-node-5] => { 2026-02-09 05:37:52.013107 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:37:52.013117 | orchestrator | } 2026-02-09 05:37:52.013126 | orchestrator | 2026-02-09 05:37:52.013136 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-09 05:37:52.013148 | orchestrator | Monday 09 February 2026 05:37:51 +0000 (0:00:00.914) 0:00:20.858 ******* 2026-02-09 05:37:52.013166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-09 05:37:52.013185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-09 05:37:52.013204 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:37:52.013221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-09 05:37:52.013260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-09 05:38:16.606213 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:38:16.606345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-09 05:38:16.606368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-09 05:38:16.606382 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:38:16.606411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-09 05:38:16.606424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  
2026-02-09 05:38:16.606458 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-02-09 05:38:16.606471 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-02-09 05:38:16.606494 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:38:16.606505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-09 05:38:16.606538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-09 05:38:16.606554 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:38:16.606567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server',
'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-09 05:38:16.606587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-09 05:38:16.606600 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:38:16.606614 | orchestrator | 2026-02-09 05:38:16.606627 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-09 05:38:16.606648 | orchestrator | Monday 09 February 2026 05:37:53 +0000 (0:00:01.842) 0:00:22.701 ******* 2026-02-09 05:38:16.606661 | orchestrator | 2026-02-09 05:38:16.606675 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-09 05:38:16.606686 | orchestrator | Monday 09 February 
2026 05:37:53 +0000 (0:00:00.162) 0:00:22.863 *******
2026-02-09 05:38:16.606699 | orchestrator |
2026-02-09 05:38:16.606712 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-09 05:38:16.606724 | orchestrator | Monday 09 February 2026 05:37:53 +0000 (0:00:00.164) 0:00:23.028 *******
2026-02-09 05:38:16.606737 | orchestrator |
2026-02-09 05:38:16.606749 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-09 05:38:16.606762 | orchestrator | Monday 09 February 2026 05:37:54 +0000 (0:00:00.165) 0:00:23.193 *******
2026-02-09 05:38:16.606775 | orchestrator |
2026-02-09 05:38:16.606787 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-09 05:38:16.606800 | orchestrator | Monday 09 February 2026 05:37:54 +0000 (0:00:00.483) 0:00:23.677 *******
2026-02-09 05:38:16.606812 | orchestrator |
2026-02-09 05:38:16.606824 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-09 05:38:16.606836 | orchestrator | Monday 09 February 2026 05:37:54 +0000 (0:00:00.155) 0:00:23.832 *******
2026-02-09 05:38:16.606849 | orchestrator |
2026-02-09 05:38:16.606861 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-09 05:38:16.606874 | orchestrator | Monday 09 February 2026 05:37:54 +0000 (0:00:00.156) 0:00:23.989 *******
2026-02-09 05:38:16.606887 | orchestrator | changed: [testbed-node-3]
2026-02-09 05:38:16.606898 | orchestrator | changed: [testbed-node-4]
2026-02-09 05:38:16.606909 | orchestrator | changed: [testbed-node-5]
2026-02-09 05:38:16.606920 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:38:16.606930 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:38:16.606941 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:38:16.606981 | orchestrator |
2026-02-09 05:38:16.606993 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-09 05:38:16.607004 | orchestrator | Monday 09 February 2026 05:38:05 +0000 (0:00:10.854) 0:00:34.843 *******
2026-02-09 05:38:16.607015 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:38:16.607026 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:38:16.607037 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:38:16.607047 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:38:16.607058 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:38:16.607068 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:38:16.607079 | orchestrator |
2026-02-09 05:38:16.607089 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-09 05:38:16.607100 | orchestrator | Monday 09 February 2026 05:38:06 +0000 (0:00:01.161) 0:00:36.004 *******
2026-02-09 05:38:16.607111 | orchestrator | changed: [testbed-node-3]
2026-02-09 05:38:16.607129 | orchestrator | changed: [testbed-node-4]
2026-02-09 05:38:29.874748 | orchestrator | changed: [testbed-node-5]
2026-02-09 05:38:29.874858 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:38:29.874872 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:38:29.874883 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:38:29.874893 | orchestrator |
2026-02-09 05:38:29.874904 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-09 05:38:29.874916 | orchestrator | Monday 09 February 2026 05:38:16 +0000 (0:00:09.761) 0:00:45.766 *******
2026-02-09 05:38:29.874927 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-09 05:38:29.874938 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-09 05:38:29.874947 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-09 05:38:29.874957 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-09 05:38:29.875047 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-09 05:38:29.875060 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-09 05:38:29.875069 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-09 05:38:29.875078 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-09 05:38:29.875088 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-09 05:38:29.875097 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-09 05:38:29.875107 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-09 05:38:29.875116 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-09 05:38:29.875141 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-09 05:38:29.875151 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-09 05:38:29.875160 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-09 05:38:29.875170 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-09 05:38:29.875179 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-09 05:38:29.875188 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-09 05:38:29.875198 | orchestrator |
2026-02-09 05:38:29.875208 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-09 05:38:29.875217 | orchestrator | Monday 09 February 2026 05:38:23 +0000 (0:00:06.442) 0:00:52.209 *******
2026-02-09 05:38:29.875227 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-09 05:38:29.875237 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:38:29.875246 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-09 05:38:29.875255 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:38:29.875264 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-09 05:38:29.875274 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:38:29.875283 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-02-09 05:38:29.875293 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-02-09 05:38:29.875302 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-02-09 05:38:29.875312 | orchestrator |
2026-02-09 05:38:29.875321 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-09 05:38:29.875331 | orchestrator | Monday 09 February 2026 05:38:25 +0000 (0:00:02.227) 0:00:54.436 *******
2026-02-09 05:38:29.875341 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-09 05:38:29.875384 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:38:29.875395 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-09 05:38:29.875405 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:38:29.875414 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex',
'vxlan0'])
2026-02-09 05:38:29.875424 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:38:29.875433 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-09 05:38:29.875443 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-09 05:38:29.875452 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-09 05:38:29.875462 | orchestrator |
2026-02-09 05:38:29.875483 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 05:38:29.875494 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-09 05:38:29.875505 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-09 05:38:29.875531 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-09 05:38:29.875542 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-09 05:38:29.875551 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-09 05:38:29.875561 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-09 05:38:29.875570 | orchestrator |
2026-02-09 05:38:29.875580 | orchestrator |
2026-02-09 05:38:29.875590 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 05:38:29.875599 | orchestrator | Monday 09 February 2026 05:38:29 +0000 (0:00:04.141) 0:00:58.578 *******
2026-02-09 05:38:29.875609 | orchestrator | ===============================================================================
2026-02-09 05:38:29.875618 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.85s
2026-02-09 05:38:29.875628 | orchestrator | openvswitch : Restart openvswitch-vswitchd container -------------------- 9.76s
2026-02-09 05:38:29.875637 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.44s
2026-02-09 05:38:29.875647 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.14s
2026-02-09 05:38:29.875656 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.63s
2026-02-09 05:38:29.875665 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.40s
2026-02-09 05:38:29.875675 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.23s
2026-02-09 05:38:29.875684 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.13s
2026-02-09 05:38:29.875693 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.88s
2026-02-09 05:38:29.875703 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.84s
2026-02-09 05:38:29.875718 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.72s
2026-02-09 05:38:29.875727 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.50s
2026-02-09 05:38:29.875737 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.41s
2026-02-09 05:38:29.875746 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.41s
2026-02-09 05:38:29.875756 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.29s
2026-02-09 05:38:29.875765 | orchestrator | module-load : Load modules ---------------------------------------------- 1.26s
2026-02-09 05:38:29.875775 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.16s
2026-02-09 05:38:29.875784 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.09s
2026-02-09 05:38:29.875793 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.02s
2026-02-09 05:38:29.875803 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.91s
2026-02-09 05:38:30.223636 | orchestrator | + osism apply -a upgrade ovn
2026-02-09 05:38:32.521453 | orchestrator | 2026-02-09 05:38:32 | INFO  | Task c1634523-ff3b-45fc-aa8f-d01f9ef38482 (ovn) was prepared for execution.
2026-02-09 05:38:32.521554 | orchestrator | 2026-02-09 05:38:32 | INFO  | It takes a moment until task c1634523-ff3b-45fc-aa8f-d01f9ef38482 (ovn) has been started and output is visible here.
2026-02-09 05:38:55.928855 | orchestrator |
2026-02-09 05:38:55.928973 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-09 05:38:55.928992 | orchestrator |
2026-02-09 05:38:55.929048 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-09 05:38:55.929060 | orchestrator | Monday 09 February 2026 05:38:38 +0000 (0:00:01.769) 0:00:01.769 *******
2026-02-09 05:38:55.929072 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:38:55.929084 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:38:55.929096 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:38:55.929107 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:38:55.929118 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:38:55.929128 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:38:55.929139 | orchestrator |
2026-02-09 05:38:55.929150 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-09 05:38:55.929161 | orchestrator | Monday 09 February 2026 05:38:41 +0000 (0:00:02.701) 0:00:04.471 *******
2026-02-09 05:38:55.929172 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-09 05:38:55.929184 | orchestrator | ok: [testbed-node-1]
=> (item=enable_ovn_True)
2026-02-09 05:38:55.929195 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-09 05:38:55.929206 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-09 05:38:55.929216 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-09 05:38:55.929227 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-09 05:38:55.929238 | orchestrator |
2026-02-09 05:38:55.929249 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-09 05:38:55.929260 | orchestrator |
2026-02-09 05:38:55.929271 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-09 05:38:55.929282 | orchestrator | Monday 09 February 2026 05:38:44 +0000 (0:00:02.678) 0:00:07.150 *******
2026-02-09 05:38:55.929293 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 05:38:55.929306 | orchestrator |
2026-02-09 05:38:55.929317 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-09 05:38:55.929327 | orchestrator | Monday 09 February 2026 05:38:48 +0000 (0:00:04.005) 0:00:11.156 *******
2026-02-09 05:38:55.929341 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 05:38:55.929355 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled':
True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929366 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929394 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929430 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929464 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929478 | orchestrator | 2026-02-09 05:38:55.929490 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-09 05:38:55.929503 | orchestrator | Monday 09 February 2026 05:38:50 +0000 (0:00:02.475) 0:00:13.631 ******* 2026-02-09 05:38:55.929517 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929546 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929560 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-09 05:38:55.929573 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929586 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929599 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929620 | orchestrator | 2026-02-09 05:38:55.929639 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-09 05:38:55.929650 | orchestrator | Monday 09 February 2026 05:38:53 +0000 (0:00:02.694) 0:00:16.325 ******* 2026-02-09 05:38:55.929661 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929673 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:38:55.929691 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930143 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930230 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930237 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930242 | orchestrator | 2026-02-09 05:39:03.930254 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-09 05:39:03.930259 | orchestrator | Monday 09 February 2026 05:38:55 +0000 (0:00:02.526) 0:00:18.852 ******* 2026-02-09 05:39:03.930263 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930294 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930309 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930313 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930317 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930333 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930338 | orchestrator | 2026-02-09 05:39:03.930342 | orchestrator | TASK [service-check-containers : ovn_controller | Check 
containers] ************ 2026-02-09 05:39:03.930345 | orchestrator | Monday 09 February 2026 05:38:59 +0000 (0:00:03.217) 0:00:22.070 ******* 2026-02-09 05:39:03.930350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:39:03.930382 | orchestrator | 2026-02-09 05:39:03.930386 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-09 05:39:03.930391 | orchestrator | Monday 09 February 2026 05:39:01 +0000 (0:00:02.605) 0:00:24.675 ******* 2026-02-09 05:39:03.930395 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:39:03.930400 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:39:03.930404 | orchestrator | } 2026-02-09 05:39:03.930408 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:39:03.930412 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:39:03.930415 | orchestrator | } 2026-02-09 05:39:03.930419 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 05:39:03.930423 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:39:03.930426 | orchestrator | } 2026-02-09 05:39:03.930430 | orchestrator | changed: 
[testbed-node-3] => { 2026-02-09 05:39:03.930434 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:39:03.930437 | orchestrator | } 2026-02-09 05:39:03.930441 | orchestrator | changed: [testbed-node-4] => { 2026-02-09 05:39:03.930445 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:39:03.930448 | orchestrator | } 2026-02-09 05:39:03.930452 | orchestrator | changed: [testbed-node-5] => { 2026-02-09 05:39:03.930456 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:39:03.930460 | orchestrator | } 2026-02-09 05:39:03.930464 | orchestrator | 2026-02-09 05:39:03.930467 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-09 05:39:03.930471 | orchestrator | Monday 09 February 2026 05:39:03 +0000 (0:00:02.067) 0:00:26.743 ******* 2026-02-09 05:39:03.930480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:39:34.223164 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:39:34.223297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:39:34.223321 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:39:34.223367 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:39:34.223380 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:39:34.223392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:39:34.223403 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:39:34.223414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:39:34.223425 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:39:34.223452 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:39:34.223463 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:39:34.223475 | orchestrator | 2026-02-09 05:39:34.223486 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-09 05:39:34.223498 | orchestrator | Monday 09 February 2026 05:39:06 +0000 (0:00:02.513) 0:00:29.256 ******* 2026-02-09 05:39:34.223509 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:39:34.223521 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:39:34.223531 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:39:34.223541 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:39:34.223552 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:39:34.223562 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:39:34.223573 | orchestrator | 2026-02-09 05:39:34.223584 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-09 05:39:34.223594 | orchestrator | Monday 09 February 2026 05:39:09 +0000 (0:00:03.637) 0:00:32.894 ******* 2026-02-09 05:39:34.223605 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-09 05:39:34.223616 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-09 05:39:34.223627 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-09 05:39:34.223638 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-09 05:39:34.223649 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-09 05:39:34.223660 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-09 05:39:34.223672 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 
2026-02-09 05:39:34.223689 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 05:39:34.223708 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 05:39:34.223731 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 05:39:34.223742 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 05:39:34.223792 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-09 05:39:34.223805 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-09 05:39:34.223829 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-09 05:39:34.223840 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-09 05:39:34.223851 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-09 05:39:34.223861 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-09 05:39:34.223872 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-09 05:39:34.223884 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 05:39:34.223904 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 
05:39:34.223922 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 05:39:34.223942 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 05:39:34.223962 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 05:39:34.223981 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-09 05:39:34.223996 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 05:39:34.224006 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 05:39:34.224017 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 05:39:34.224028 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 05:39:34.224071 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 05:39:34.224084 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-09 05:39:34.224095 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 05:39:34.224106 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 05:39:34.224116 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 05:39:34.224127 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 05:39:34.224144 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-09 05:39:34.224155 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': 
False}) 2026-02-09 05:39:34.224165 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-09 05:39:34.224176 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-09 05:39:34.224187 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-09 05:39:34.224206 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-09 05:39:34.224216 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-09 05:39:34.224227 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-09 05:39:34.224239 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-09 05:39:34.224257 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-09 05:39:34.224268 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-09 05:39:34.224278 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-09 05:39:34.224289 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-09 05:39:34.224309 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-09 05:42:22.681128 | orchestrator | ok: 
[testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-09 05:42:22.681304 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-09 05:42:22.681315 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-09 05:42:22.681323 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-09 05:42:22.681331 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-09 05:42:22.681338 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-09 05:42:22.681344 | orchestrator | 2026-02-09 05:42:22.681352 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 05:42:22.681359 | orchestrator | Monday 09 February 2026 05:39:30 +0000 (0:00:21.031) 0:00:53.925 ******* 2026-02-09 05:42:22.681368 | orchestrator | 2026-02-09 05:42:22.681412 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 05:42:22.681421 | orchestrator | Monday 09 February 2026 05:39:31 +0000 (0:00:00.464) 0:00:54.390 ******* 2026-02-09 05:42:22.681427 | orchestrator | 2026-02-09 05:42:22.681433 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 05:42:22.681440 | orchestrator | Monday 09 February 2026 05:39:31 +0000 (0:00:00.486) 0:00:54.877 ******* 2026-02-09 05:42:22.681447 | orchestrator | 2026-02-09 05:42:22.681453 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 05:42:22.681460 | orchestrator | Monday 09 February 2026 
05:39:32 +0000 (0:00:00.461) 0:00:55.338 ******* 2026-02-09 05:42:22.681467 | orchestrator | 2026-02-09 05:42:22.681474 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 05:42:22.681481 | orchestrator | Monday 09 February 2026 05:39:32 +0000 (0:00:00.493) 0:00:55.832 ******* 2026-02-09 05:42:22.681487 | orchestrator | 2026-02-09 05:42:22.681494 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-09 05:42:22.681500 | orchestrator | Monday 09 February 2026 05:39:33 +0000 (0:00:00.461) 0:00:56.293 ******* 2026-02-09 05:42:22.681510 | orchestrator | 2026-02-09 05:42:22.681521 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-09 05:42:22.681560 | orchestrator | Monday 09 February 2026 05:39:34 +0000 (0:00:00.805) 0:00:57.099 ******* 2026-02-09 05:42:22.681572 | orchestrator | 2026-02-09 05:42:22.681582 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-02-09 05:42:22.681593 | orchestrator | changed: [testbed-node-3] 2026-02-09 05:42:22.681604 | orchestrator | changed: [testbed-node-5] 2026-02-09 05:42:22.681614 | orchestrator | changed: [testbed-node-4] 2026-02-09 05:42:22.681624 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:42:22.681634 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:42:22.681644 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:42:22.681654 | orchestrator | 2026-02-09 05:42:22.681665 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-09 05:42:22.681675 | orchestrator | 2026-02-09 05:42:22.681686 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-09 05:42:22.681717 | orchestrator | Monday 09 February 2026 05:41:45 +0000 (0:02:11.674) 0:03:08.773 ******* 2026-02-09 05:42:22.681728 | orchestrator | 
included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:42:22.681740 | orchestrator | 2026-02-09 05:42:22.681751 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-09 05:42:22.681763 | orchestrator | Monday 09 February 2026 05:41:47 +0000 (0:00:02.009) 0:03:10.782 ******* 2026-02-09 05:42:22.681775 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-09 05:42:22.681786 | orchestrator | 2026-02-09 05:42:22.681798 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-09 05:42:22.681809 | orchestrator | Monday 09 February 2026 05:41:49 +0000 (0:00:01.906) 0:03:12.689 ******* 2026-02-09 05:42:22.681820 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.681833 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.681844 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.681855 | orchestrator | 2026-02-09 05:42:22.681866 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-09 05:42:22.681878 | orchestrator | Monday 09 February 2026 05:41:51 +0000 (0:00:01.866) 0:03:14.556 ******* 2026-02-09 05:42:22.681889 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.681900 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.681912 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.681925 | orchestrator | 2026-02-09 05:42:22.681939 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-09 05:42:22.681953 | orchestrator | Monday 09 February 2026 05:41:53 +0000 (0:00:01.517) 0:03:16.074 ******* 2026-02-09 05:42:22.681966 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.681979 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.681992 | orchestrator | ok: [testbed-node-2] 2026-02-09 
05:42:22.682005 | orchestrator | 2026-02-09 05:42:22.682082 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-09 05:42:22.682097 | orchestrator | Monday 09 February 2026 05:41:54 +0000 (0:00:01.379) 0:03:17.453 ******* 2026-02-09 05:42:22.682110 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.682122 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.682134 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.682146 | orchestrator | 2026-02-09 05:42:22.682159 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-09 05:42:22.682171 | orchestrator | Monday 09 February 2026 05:41:56 +0000 (0:00:01.625) 0:03:19.078 ******* 2026-02-09 05:42:22.682183 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.682256 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.682273 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.682286 | orchestrator | 2026-02-09 05:42:22.682299 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-09 05:42:22.682311 | orchestrator | Monday 09 February 2026 05:41:57 +0000 (0:00:01.495) 0:03:20.574 ******* 2026-02-09 05:42:22.682323 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:42:22.682334 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:42:22.682360 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:42:22.682372 | orchestrator | 2026-02-09 05:42:22.682383 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-09 05:42:22.682409 | orchestrator | Monday 09 February 2026 05:41:58 +0000 (0:00:01.357) 0:03:21.932 ******* 2026-02-09 05:42:22.682421 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.682433 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.682445 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.682457 | orchestrator | 2026-02-09 
05:42:22.682469 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-09 05:42:22.682481 | orchestrator | Monday 09 February 2026 05:42:00 +0000 (0:00:01.794) 0:03:23.726 ******* 2026-02-09 05:42:22.682493 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.682504 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.682515 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.682528 | orchestrator | 2026-02-09 05:42:22.682540 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-09 05:42:22.682552 | orchestrator | Monday 09 February 2026 05:42:02 +0000 (0:00:01.608) 0:03:25.335 ******* 2026-02-09 05:42:22.682563 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.682575 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.682587 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.682600 | orchestrator | 2026-02-09 05:42:22.682612 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-09 05:42:22.682624 | orchestrator | Monday 09 February 2026 05:42:04 +0000 (0:00:01.931) 0:03:27.267 ******* 2026-02-09 05:42:22.682636 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.682648 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.682660 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.682672 | orchestrator | 2026-02-09 05:42:22.682683 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-09 05:42:22.682695 | orchestrator | Monday 09 February 2026 05:42:05 +0000 (0:00:01.423) 0:03:28.691 ******* 2026-02-09 05:42:22.682707 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:42:22.682719 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:42:22.682731 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:42:22.682743 | orchestrator | 2026-02-09 05:42:22.682755 | orchestrator | TASK [ovn-db : Check if 
running on all OVN SB DB hosts] ************************ 2026-02-09 05:42:22.682767 | orchestrator | Monday 09 February 2026 05:42:07 +0000 (0:00:01.436) 0:03:30.127 ******* 2026-02-09 05:42:22.682779 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:42:22.682791 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:42:22.682803 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:42:22.682815 | orchestrator | 2026-02-09 05:42:22.682827 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-09 05:42:22.682838 | orchestrator | Monday 09 February 2026 05:42:08 +0000 (0:00:01.370) 0:03:31.498 ******* 2026-02-09 05:42:22.682849 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.682860 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.682870 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.682882 | orchestrator | 2026-02-09 05:42:22.682894 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-09 05:42:22.682907 | orchestrator | Monday 09 February 2026 05:42:10 +0000 (0:00:01.826) 0:03:33.324 ******* 2026-02-09 05:42:22.682918 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.682941 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.682953 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.682964 | orchestrator | 2026-02-09 05:42:22.682976 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-09 05:42:22.682988 | orchestrator | Monday 09 February 2026 05:42:11 +0000 (0:00:01.416) 0:03:34.741 ******* 2026-02-09 05:42:22.683000 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.683011 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.683023 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.683034 | orchestrator | 2026-02-09 05:42:22.683047 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 
2026-02-09 05:42:22.683070 | orchestrator | Monday 09 February 2026 05:42:13 +0000 (0:00:01.829) 0:03:36.571 ******* 2026-02-09 05:42:22.683082 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:42:22.683094 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:42:22.683105 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:42:22.683117 | orchestrator | 2026-02-09 05:42:22.683129 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-09 05:42:22.683141 | orchestrator | Monday 09 February 2026 05:42:15 +0000 (0:00:01.455) 0:03:38.026 ******* 2026-02-09 05:42:22.683153 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:42:22.683166 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:42:22.683177 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:42:22.683189 | orchestrator | 2026-02-09 05:42:22.683226 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-09 05:42:22.683239 | orchestrator | Monday 09 February 2026 05:42:16 +0000 (0:00:01.558) 0:03:39.585 ******* 2026-02-09 05:42:22.683251 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:42:22.683263 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:42:22.683275 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:42:22.683287 | orchestrator | 2026-02-09 05:42:22.683299 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-09 05:42:22.683311 | orchestrator | Monday 09 February 2026 05:42:18 +0000 (0:00:01.728) 0:03:41.313 ******* 2026-02-09 05:42:22.683340 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890461 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890600 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890617 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-09 05:42:28.890630 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890691 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:28.890718 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890749 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:28.890773 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890784 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 05:42:28.890805 | orchestrator |
2026-02-09 05:42:28.890819 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-09 05:42:28.890832 | orchestrator | Monday 09 February 2026 05:42:22 +0000 (0:00:04.284) 0:03:45.598 *******
2026-02-09 05:42:28.890849 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 05:42:28.890861 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 05:42:28.890873 | orchestrator | ok: [testbed-node-2]
=> (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890884 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:28.890904 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.088830 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.088935 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.088968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:44.088991 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.089000 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 05:42:44.089008 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 05:42:44.089017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-09 05:42:44.089025 | orchestrator |
2026-02-09 05:42:44.089035 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-02-09 05:42:44.089045 | orchestrator | Monday 09 February 2026 05:42:28 +0000 (0:00:06.214) 0:03:51.813 *******
2026-02-09 05:42:44.089054 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-02-09 05:42:44.089068 |
orchestrator |
2026-02-09 05:42:44.089082 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-02-09 05:42:44.089097 | orchestrator | Monday 09 February 2026 05:42:30 +0000 (0:00:01.986) 0:03:53.800 *******
2026-02-09 05:42:44.089111 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:42:44.089125 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:42:44.089156 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:42:44.089171 | orchestrator |
2026-02-09 05:42:44.089185 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-02-09 05:42:44.089200 | orchestrator | Monday 09 February 2026 05:42:32 +0000 (0:00:02.094) 0:03:55.895 *******
2026-02-09 05:42:44.089213 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:42:44.089251 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:42:44.089270 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:42:44.089278 | orchestrator |
2026-02-09 05:42:44.089285 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-02-09 05:42:44.089293 | orchestrator | Monday 09 February 2026 05:42:35 +0000 (0:00:02.761) 0:03:58.657 *******
2026-02-09 05:42:44.089301 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:42:44.089308 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:42:44.089316 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:42:44.089324 | orchestrator |
2026-02-09 05:42:44.089331 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-02-09 05:42:44.089339 | orchestrator | Monday 09 February 2026 05:42:38 +0000 (0:00:02.864) 0:04:01.521 *******
2026-02-09 05:42:44.089349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB':
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.089366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.089376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.089387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.089398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.089407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:44.089431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:49.139271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 
'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:49.139432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:42:49.139466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139479 | orchestrator | 2026-02-09 05:42:49.139492 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-09 05:42:49.139505 | orchestrator | Monday 09 February 2026 05:42:44 +0000 (0:00:05.474) 0:04:06.996 ******* 2026-02-09 05:42:49.139517 | orchestrator | changed: [testbed-node-0] => { 2026-02-09 05:42:49.139529 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:42:49.139540 | orchestrator | } 2026-02-09 05:42:49.139552 | orchestrator | changed: [testbed-node-1] => { 2026-02-09 05:42:49.139563 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:42:49.139574 | orchestrator | } 2026-02-09 05:42:49.139584 | orchestrator | changed: [testbed-node-2] => { 2026-02-09 05:42:49.139595 | orchestrator |  "msg": "Notifying handlers" 2026-02-09 05:42:49.139631 | orchestrator | } 2026-02-09 05:42:49.139642 | orchestrator | 2026-02-09 05:42:49.139653 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-09 05:42:49.139665 | orchestrator | Monday 09 February 2026 05:42:45 +0000 (0:00:01.453) 0:04:08.449 ******* 2026-02-09 05:42:49.139677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 
'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-09 05:42:49.139838 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-09 05:44:23.388910 | orchestrator | 2026-02-09 05:44:23.389029 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-09 05:44:23.389066 | orchestrator | Monday 09 February 2026 05:42:49 +0000 (0:00:03.607) 0:04:12.056 ******* 2026-02-09 05:44:23.389080 | orchestrator | changed: 
[testbed-node-0] => (item=[1])
2026-02-09 05:44:23.389092 | orchestrator | changed: [testbed-node-1] => (item=[1])
2026-02-09 05:44:23.389103 | orchestrator | changed: [testbed-node-2] => (item=[1])
2026-02-09 05:44:23.389114 | orchestrator |
2026-02-09 05:44:23.389125 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-02-09 05:44:23.389137 | orchestrator | Monday 09 February 2026 05:42:51 +0000 (0:00:02.262) 0:04:14.319 *******
2026-02-09 05:44:23.389148 | orchestrator | changed: [testbed-node-0] => {
2026-02-09 05:44:23.389160 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:44:23.389172 | orchestrator | }
2026-02-09 05:44:23.389183 | orchestrator | changed: [testbed-node-1] => {
2026-02-09 05:44:23.389194 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:44:23.389206 | orchestrator | }
2026-02-09 05:44:23.389217 | orchestrator | changed: [testbed-node-2] => {
2026-02-09 05:44:23.389228 | orchestrator |  "msg": "Notifying handlers"
2026-02-09 05:44:23.389239 | orchestrator | }
2026-02-09 05:44:23.389250 | orchestrator |
2026-02-09 05:44:23.389277 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-09 05:44:23.389289 | orchestrator | Monday 09 February 2026 05:42:52 +0000 (0:00:01.473) 0:04:15.792 *******
2026-02-09 05:44:23.389361 | orchestrator |
2026-02-09 05:44:23.389373 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-09 05:44:23.389384 | orchestrator | Monday 09 February 2026 05:42:53 +0000 (0:00:00.480) 0:04:16.273 *******
2026-02-09 05:44:23.389395 | orchestrator |
2026-02-09 05:44:23.389406 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-09 05:44:23.389417 | orchestrator | Monday 09 February 2026 05:42:53 +0000 (0:00:00.444) 0:04:16.717 *******
2026-02-09 05:44:23.389461 | orchestrator |
2026-02-09 05:44:23.389475
| orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-09 05:44:23.389487 | orchestrator | Monday 09 February 2026 05:42:54 +0000 (0:00:00.809) 0:04:17.526 *******
2026-02-09 05:44:23.389500 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:44:23.389513 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:44:23.389525 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:44:23.389537 | orchestrator |
2026-02-09 05:44:23.389550 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-09 05:44:23.389562 | orchestrator | Monday 09 February 2026 05:43:12 +0000 (0:00:17.606) 0:04:35.132 *******
2026-02-09 05:44:23.389573 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:44:23.389584 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:44:23.389594 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:44:23.389605 | orchestrator |
2026-02-09 05:44:23.389616 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-02-09 05:44:23.389627 | orchestrator | Monday 09 February 2026 05:43:29 +0000 (0:00:16.982) 0:04:52.115 *******
2026-02-09 05:44:23.389638 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-02-09 05:44:23.389649 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-02-09 05:44:23.389660 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-02-09 05:44:23.389670 | orchestrator |
2026-02-09 05:44:23.389682 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-09 05:44:23.389693 | orchestrator | Monday 09 February 2026 05:43:45 +0000 (0:00:15.841) 0:05:07.956 *******
2026-02-09 05:44:23.389704 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:44:23.389715 | orchestrator | changed: [testbed-node-1]
2026-02-09 05:44:23.389725 | orchestrator | changed: [testbed-node-2]
2026-02-09 05:44:23.389736 | orchestrator |
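After the restart handlers, the play waits for leader election and then runs "Get OVN_Northbound cluster leader" / "Get OVN_Southbound cluster leader". A leader check like that presumably inspects an `ovs-appctl ... cluster/status` dump; a minimal Python sketch of parsing such a dump (the sample text below is synthetic and abbreviated, and `is_leader` is a hypothetical helper, not the role's actual code):

```python
# Sketch: decide whether a node is the Raft leader from a
# cluster/status-style dump. Sample output is synthetic; real
# ovsdb-server output carries more fields.
sample_status = """\
Name: OVN_Northbound
Role: leader
Term: 4
"""

def is_leader(status_text):
    """Return True when the dump reports 'Role: leader'."""
    for line in status_text.splitlines():
        if line.startswith("Role:"):
            return line.split(":", 1)[1].strip() == "leader"
    return False

print(is_leader(sample_status))  # True
```

Only the node whose dump reports the leader role goes on to run the "Configure OVN NB/SB connection settings" tasks; the log shows exactly that pattern (changed on testbed-node-0, skipping on the other two).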
2026-02-09 05:44:23.389747 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-09 05:44:23.389758 | orchestrator | Monday 09 February 2026 05:44:02 +0000 (0:00:17.415) 0:05:25.372 *******
2026-02-09 05:44:23.389769 | orchestrator | Pausing for 5 seconds
2026-02-09 05:44:23.389780 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:44:23.389791 | orchestrator |
2026-02-09 05:44:23.389802 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-09 05:44:23.389813 | orchestrator | Monday 09 February 2026 05:44:08 +0000 (0:00:06.237) 0:05:31.610 *******
2026-02-09 05:44:23.389823 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:44:23.389834 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:44:23.389845 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:44:23.389855 | orchestrator |
2026-02-09 05:44:23.389866 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-09 05:44:23.389877 | orchestrator | Monday 09 February 2026 05:44:10 +0000 (0:00:01.823) 0:05:33.433 *******
2026-02-09 05:44:23.389888 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:44:23.389899 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:44:23.389910 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:44:23.389920 | orchestrator |
2026-02-09 05:44:23.389931 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-09 05:44:23.389942 | orchestrator | Monday 09 February 2026 05:44:12 +0000 (0:00:01.708) 0:05:35.142 *******
2026-02-09 05:44:23.389952 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:44:23.389963 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:44:23.389974 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:44:23.389985 | orchestrator |
2026-02-09 05:44:23.389995 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings]
***************************
2026-02-09 05:44:23.390006 | orchestrator | Monday 09 February 2026 05:44:14 +0000 (0:00:01.863) 0:05:37.005 *******
2026-02-09 05:44:23.390070 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:44:23.390084 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:44:23.390095 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:44:23.390106 | orchestrator |
2026-02-09 05:44:23.390117 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-09 05:44:23.390136 | orchestrator | Monday 09 February 2026 05:44:15 +0000 (0:00:01.803) 0:05:38.809 *******
2026-02-09 05:44:23.390147 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:44:23.390158 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:44:23.390169 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:44:23.390179 | orchestrator |
2026-02-09 05:44:23.390190 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-09 05:44:23.390220 | orchestrator | Monday 09 February 2026 05:44:17 +0000 (0:00:01.810) 0:05:40.620 *******
2026-02-09 05:44:23.390232 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:44:23.390243 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:44:23.390253 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:44:23.390264 | orchestrator |
2026-02-09 05:44:23.390275 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-02-09 05:44:23.390286 | orchestrator | Monday 09 February 2026 05:44:19 +0000 (0:00:01.817) 0:05:42.437 *******
2026-02-09 05:44:23.390320 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-02-09 05:44:23.390331 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-02-09 05:44:23.390342 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-02-09 05:44:23.390353 | orchestrator |
2026-02-09 05:44:23.390364 | orchestrator | PLAY RECAP
********************************************************************* 2026-02-09 05:44:23.390376 | orchestrator | testbed-node-0 : ok=50  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-09 05:44:23.390394 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-09 05:44:23.390405 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-09 05:44:23.390417 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 05:44:23.390427 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 05:44:23.390438 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-09 05:44:23.390449 | orchestrator | 2026-02-09 05:44:23.390460 | orchestrator | 2026-02-09 05:44:23.390471 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-09 05:44:23.390482 | orchestrator | Monday 09 February 2026 05:44:22 +0000 (0:00:03.294) 0:05:45.732 ******* 2026-02-09 05:44:23.390493 | orchestrator | =============================================================================== 2026-02-09 05:44:23.390504 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.67s 2026-02-09 05:44:23.390514 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.03s 2026-02-09 05:44:23.390525 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 17.61s 2026-02-09 05:44:23.390536 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.42s 2026-02-09 05:44:23.390547 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.98s 2026-02-09 05:44:23.390557 | orchestrator | ovn-db : 
Restart ovn-sb-db-relay container ----------------------------- 15.84s 2026-02-09 05:44:23.390568 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.24s 2026-02-09 05:44:23.390579 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.21s 2026-02-09 05:44:23.390589 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.47s 2026-02-09 05:44:23.390600 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.28s 2026-02-09 05:44:23.390611 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 4.01s 2026-02-09 05:44:23.390622 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.64s 2026-02-09 05:44:23.390639 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.61s 2026-02-09 05:44:23.390650 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.29s 2026-02-09 05:44:23.390661 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.22s 2026-02-09 05:44:23.390672 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.17s 2026-02-09 05:44:23.390683 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.86s 2026-02-09 05:44:23.390694 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.76s 2026-02-09 05:44:23.390704 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.70s 2026-02-09 05:44:23.390716 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.69s 2026-02-09 05:44:23.895647 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-09 05:44:23.895769 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-09 05:44:23.895797 
| orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-02-09 05:44:23.904603 | orchestrator | + set -e
2026-02-09 05:44:23.904662 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-09 05:44:23.904676 | orchestrator | ++ export INTERACTIVE=false
2026-02-09 05:44:23.904687 | orchestrator | ++ INTERACTIVE=false
2026-02-09 05:44:23.904717 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-09 05:44:23.904728 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-09 05:44:23.904739 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-02-09 05:44:26.068286 | orchestrator | 2026-02-09 05:44:26 | INFO  | Task da74af77-f282-4620-96b5-623635455f52 (ceph-rolling_update) was prepared for execution.
2026-02-09 05:44:26.068448 | orchestrator | 2026-02-09 05:44:26 | INFO  | It takes a moment until task da74af77-f282-4620-96b5-623635455f52 (ceph-rolling_update) has been started and output is visible here.
2026-02-09 05:45:29.448108 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-09 05:45:29.448254 | orchestrator | 2.16.14
2026-02-09 05:45:29.448280 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-02-09 05:45:29.448301 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-02-09 05:45:29.448340 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-02-09 05:45:29.448450 | orchestrator | (): 'NoneType' object is not subscriptable
2026-02-09 05:45:29.448492 | orchestrator |
2026-02-09 05:45:29.448512 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-02-09 05:45:29.448532 | orchestrator |
2026-02-09 05:45:29.448552 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-02-09 05:45:29.448572 | orchestrator | Monday 09 February 2026 05:44:34 +0000 (0:00:01.124) 0:00:01.124 *******
2026-02-09 05:45:29.448614 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-02-09 05:45:29.448637 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-02-09 05:45:29.448659 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-02-09 05:45:29.448682 | orchestrator | skipping: [localhost]
2026-02-09 05:45:29.448704 | orchestrator |
2026-02-09 05:45:29.448723 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-02-09 05:45:29.448743 | orchestrator |
2026-02-09 05:45:29.448764 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-02-09 05:45:29.448784 | orchestrator | Monday 09 February 2026 05:44:35 +0000 (0:00:00.937) 0:00:02.061 *******
2026-02-09 05:45:29.448804 | orchestrator | ok: [testbed-node-0] => {
2026-02-09 05:45:29.448824 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-09 05:45:29.448879 | orchestrator | }
2026-02-09 05:45:29.448900 | orchestrator | ok: [testbed-node-1] => {
2026-02-09 05:45:29.448920 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-09 05:45:29.448938 | orchestrator | }
2026-02-09 05:45:29.448959 | orchestrator | ok: [testbed-node-2] => {
2026-02-09 05:45:29.448979 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-09 05:45:29.448999 | orchestrator | }
2026-02-09 05:45:29.449019 | orchestrator | ok: [testbed-node-3] => {
2026-02-09 05:45:29.449039 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-09 05:45:29.449058 | orchestrator | }
2026-02-09 05:45:29.449078 | orchestrator | ok: [testbed-node-4] => {
2026-02-09 05:45:29.449098 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-09 05:45:29.449119 |
orchestrator | }
2026-02-09 05:45:29.449137 | orchestrator | ok: [testbed-node-5] => {
2026-02-09 05:45:29.449155 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-09 05:45:29.449173 | orchestrator | }
2026-02-09 05:45:29.449191 | orchestrator | ok: [testbed-manager] => {
2026-02-09 05:45:29.449209 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference"
2026-02-09 05:45:29.449226 | orchestrator | }
2026-02-09 05:45:29.449244 | orchestrator |
2026-02-09 05:45:29.449261 | orchestrator | TASK [Gather facts] ************************************************************
2026-02-09 05:45:29.449279 | orchestrator | Monday 09 February 2026 05:44:37 +0000 (0:00:01.921) 0:00:03.983 *******
2026-02-09 05:45:29.449296 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:29.449315 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:45:29.449333 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:45:29.449384 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:45:29.449403 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:45:29.449420 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:45:29.449439 | orchestrator | ok: [testbed-manager]
2026-02-09 05:45:29.449457 | orchestrator |
2026-02-09 05:45:29.449477 | orchestrator | TASK [Gather and delegate facts] ***********************************************
2026-02-09 05:45:29.449497 | orchestrator | Monday 09 February 2026 05:44:41 +0000 (0:00:04.506) 0:00:08.490 *******
2026-02-09 05:45:29.449517 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:45:29.449537 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-09 05:45:29.449557 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-09 05:45:29.449573 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-09 05:45:29.449588 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-09 05:45:29.449604 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 05:45:29.449620 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 05:45:29.449639 | orchestrator |
2026-02-09 05:45:29.449657 | orchestrator | TASK [Set_fact rolling_update] *************************************************
2026-02-09 05:45:29.449675 | orchestrator | Monday 09 February 2026 05:45:15 +0000 (0:00:33.202) 0:00:41.692 *******
2026-02-09 05:45:29.449691 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:45:29.449708 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:45:29.449727 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:45:29.449745 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:45:29.449764 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:45:29.449782 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:45:29.449801 | orchestrator | ok: [testbed-manager]
2026-02-09 05:45:29.449820 | orchestrator |
2026-02-09 05:45:29.449833 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-09 05:45:29.449844 | orchestrator | Monday 09 February 2026 05:45:16 +0000 (0:00:01.042) 0:00:42.735 *******
2026-02-09 05:45:29.449897 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-09 05:45:29.449910 | orchestrator |
2026-02-09 05:45:29.449921 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-09 05:45:29.449932 | orchestrator | Monday 09 February 2026 05:45:18 +0000 (0:00:02.113) 0:00:44.848 *******
2026-02-09 05:45:29.449943 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:45:29.449954 |
orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:29.449964 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:29.449975 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:29.449986 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:29.449996 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:29.450007 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:29.450086 | orchestrator | 2026-02-09 05:45:29.450101 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-09 05:45:29.450112 | orchestrator | Monday 09 February 2026 05:45:19 +0000 (0:00:01.427) 0:00:46.276 ******* 2026-02-09 05:45:29.450123 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:29.450133 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:29.450144 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:29.450155 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:29.450165 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:29.450176 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:29.450186 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:29.450197 | orchestrator | 2026-02-09 05:45:29.450218 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-09 05:45:29.450229 | orchestrator | Monday 09 February 2026 05:45:20 +0000 (0:00:00.768) 0:00:47.044 ******* 2026-02-09 05:45:29.450240 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:29.450251 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:29.450262 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:29.450272 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:29.450283 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:29.450293 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:29.450304 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:29.450315 | orchestrator | 2026-02-09 05:45:29.450325 | orchestrator | TASK [ceph-facts : Set_fact container_binary] 
********************************** 2026-02-09 05:45:29.450336 | orchestrator | Monday 09 February 2026 05:45:21 +0000 (0:00:01.584) 0:00:48.628 ******* 2026-02-09 05:45:29.450376 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:29.450395 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:29.450412 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:29.450427 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:29.450444 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:29.450463 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:29.450482 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:29.450499 | orchestrator | 2026-02-09 05:45:29.450516 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-09 05:45:29.450527 | orchestrator | Monday 09 February 2026 05:45:22 +0000 (0:00:00.863) 0:00:49.491 ******* 2026-02-09 05:45:29.450538 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:29.450548 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:29.450559 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:29.450569 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:29.450580 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:29.450590 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:29.450601 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:29.450612 | orchestrator | 2026-02-09 05:45:29.450622 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-09 05:45:29.450633 | orchestrator | Monday 09 February 2026 05:45:24 +0000 (0:00:01.226) 0:00:50.718 ******* 2026-02-09 05:45:29.450643 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:29.450655 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:29.450674 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:29.450687 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:29.450708 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:29.450719 | 
orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:29.450729 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:29.450740 | orchestrator | 2026-02-09 05:45:29.450751 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-09 05:45:29.450761 | orchestrator | Monday 09 February 2026 05:45:24 +0000 (0:00:00.856) 0:00:51.574 ******* 2026-02-09 05:45:29.450772 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:45:29.450783 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:45:29.450794 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:45:29.450804 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:45:29.450815 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:45:29.450826 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:45:29.450836 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:45:29.450847 | orchestrator | 2026-02-09 05:45:29.450857 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-09 05:45:29.450868 | orchestrator | Monday 09 February 2026 05:45:26 +0000 (0:00:01.247) 0:00:52.822 ******* 2026-02-09 05:45:29.450879 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:29.450889 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:29.450900 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:29.450910 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:29.450921 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:29.450931 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:29.450942 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:29.450953 | orchestrator | 2026-02-09 05:45:29.450964 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-09 05:45:29.450975 | orchestrator | Monday 09 February 2026 05:45:27 +0000 (0:00:00.868) 0:00:53.690 ******* 2026-02-09 05:45:29.450985 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2026-02-09 05:45:29.450996 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 05:45:29.451007 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 05:45:29.451017 | orchestrator | 2026-02-09 05:45:29.451028 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-09 05:45:29.451039 | orchestrator | Monday 09 February 2026 05:45:28 +0000 (0:00:01.359) 0:00:55.050 ******* 2026-02-09 05:45:29.451049 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:29.451060 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:29.451070 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:29.451081 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:29.451091 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:29.451101 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:29.451112 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:29.451123 | orchestrator | 2026-02-09 05:45:29.451133 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-09 05:45:29.451155 | orchestrator | Monday 09 February 2026 05:45:29 +0000 (0:00:01.023) 0:00:56.074 ******* 2026-02-09 05:45:41.288875 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 05:45:41.288991 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 05:45:41.289009 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 05:45:41.289022 | orchestrator | 2026-02-09 05:45:41.289033 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-09 05:45:41.289045 | orchestrator | Monday 09 February 2026 05:45:31 +0000 (0:00:02.422) 0:00:58.496 ******* 2026-02-09 05:45:41.289057 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-0)  2026-02-09 05:45:41.289068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-09 05:45:41.289079 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-09 05:45:41.289090 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:45:41.289102 | orchestrator | 2026-02-09 05:45:41.289113 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-09 05:45:41.289146 | orchestrator | Monday 09 February 2026 05:45:32 +0000 (0:00:00.436) 0:00:58.933 ******* 2026-02-09 05:45:41.289175 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-09 05:45:41.289190 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-09 05:45:41.289201 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-09 05:45:41.289212 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:45:41.289223 | orchestrator | 2026-02-09 05:45:41.289234 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-09 05:45:41.289245 | orchestrator | Monday 09 February 2026 05:45:33 +0000 (0:00:00.907) 0:00:59.841 ******* 2026-02-09 05:45:41.289258 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
{'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:41.289273 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:41.289284 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:41.289296 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:45:41.289306 | orchestrator | 2026-02-09 05:45:41.289317 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-09 05:45:41.289328 | orchestrator | Monday 09 February 2026 05:45:33 +0000 (0:00:00.166) 0:01:00.007 ******* 2026-02-09 05:45:41.289341 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a495b1786f93', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-09 05:45:30.119525', 'end': '2026-02-09 05:45:30.174845', 'delta': '0:00:00.055320', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': 
True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a495b1786f93'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-09 05:45:41.289406 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'ab15bd6989cf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-09 05:45:30.734230', 'end': '2026-02-09 05:45:30.794164', 'delta': '0:00:00.059934', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ab15bd6989cf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-09 05:45:41.289437 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '08d9b4f0b230', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-09 05:45:31.627136', 'end': '2026-02-09 05:45:31.689634', 'delta': '0:00:00.062498', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['08d9b4f0b230'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-09 05:45:41.289451 | orchestrator | 2026-02-09 05:45:41.289465 | orchestrator | TASK [ceph-facts : Set_fact 
_container_exec_cmd] ******************************* 2026-02-09 05:45:41.289478 | orchestrator | Monday 09 February 2026 05:45:33 +0000 (0:00:00.211) 0:01:00.219 ******* 2026-02-09 05:45:41.289491 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:41.289503 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:41.289515 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:41.289528 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:41.289540 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:41.289553 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:41.289566 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:41.289579 | orchestrator | 2026-02-09 05:45:41.289591 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-09 05:45:41.289603 | orchestrator | Monday 09 February 2026 05:45:34 +0000 (0:00:01.253) 0:01:01.473 ******* 2026-02-09 05:45:41.289615 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:45:41.289628 | orchestrator | 2026-02-09 05:45:41.289641 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-09 05:45:41.289653 | orchestrator | Monday 09 February 2026 05:45:35 +0000 (0:00:00.252) 0:01:01.725 ******* 2026-02-09 05:45:41.289666 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:41.289679 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:41.289691 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:41.289703 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:41.289716 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:41.289728 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:41.289741 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:41.289753 | orchestrator | 2026-02-09 05:45:41.289765 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-09 05:45:41.289777 | orchestrator | Monday 09 February 2026 05:45:36 +0000 (0:00:01.026) 
0:01:02.752 ******* 2026-02-09 05:45:41.289788 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:41.289799 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-09 05:45:41.289810 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-09 05:45:41.289820 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-09 05:45:41.289831 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-09 05:45:41.289842 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-09 05:45:41.289853 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-09 05:45:41.289864 | orchestrator | 2026-02-09 05:45:41.289875 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-09 05:45:41.289885 | orchestrator | Monday 09 February 2026 05:45:38 +0000 (0:00:02.329) 0:01:05.081 ******* 2026-02-09 05:45:41.289896 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:45:41.289907 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:45:41.289925 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:45:41.289936 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:45:41.289947 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:45:41.289957 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:45:41.289968 | orchestrator | ok: [testbed-manager] 2026-02-09 05:45:41.289979 | orchestrator | 2026-02-09 05:45:41.289989 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-09 05:45:41.290000 | orchestrator | Monday 09 February 2026 05:45:39 +0000 (0:00:01.043) 0:01:06.124 ******* 2026-02-09 05:45:41.290011 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:45:41.290084 | orchestrator | 2026-02-09 05:45:41.290096 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-09 05:45:41.290107 | orchestrator | Monday 09 
February 2026 05:45:39 +0000 (0:00:00.148) 0:01:06.273 *******
2026-02-09 05:45:41.290117 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:41.290128 | orchestrator |
2026-02-09 05:45:41.290139 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-09 05:45:41.290149 | orchestrator | Monday 09 February 2026 05:45:39 +0000 (0:00:00.244) 0:01:06.517 *******
2026-02-09 05:45:41.290160 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:41.290171 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:45:41.290182 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:45:41.290192 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:45:41.290203 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:45:41.290222 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:45:47.004672 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:45:47.004779 | orchestrator |
2026-02-09 05:45:47.004797 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-09 05:45:47.004810 | orchestrator | Monday 09 February 2026 05:45:41 +0000 (0:00:01.401) 0:01:07.918 *******
2026-02-09 05:45:47.004828 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:47.004849 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:45:47.004868 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:45:47.004887 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:45:47.004906 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:45:47.004924 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:45:47.004942 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:45:47.004961 | orchestrator |
2026-02-09 05:45:47.004978 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-09 05:45:47.004995 | orchestrator | Monday 09 February 2026 05:45:42 +0000 (0:00:00.766) 0:01:08.685 *******
2026-02-09 05:45:47.005012 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:47.005030 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:45:47.005047 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:45:47.005097 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:45:47.005117 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:45:47.005135 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:45:47.005152 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:45:47.005169 | orchestrator |
2026-02-09 05:45:47.005187 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-09 05:45:47.005208 | orchestrator | Monday 09 February 2026 05:45:43 +0000 (0:00:01.067) 0:01:09.752 *******
2026-02-09 05:45:47.005227 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:47.005247 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:45:47.005266 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:45:47.005284 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:45:47.005302 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:45:47.005320 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:45:47.005335 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:45:47.005351 | orchestrator |
2026-02-09 05:45:47.005401 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-09 05:45:47.005421 | orchestrator | Monday 09 February 2026 05:45:43 +0000 (0:00:00.749) 0:01:10.502 *******
2026-02-09 05:45:47.005469 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:47.005488 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:45:47.005503 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:45:47.005520 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:45:47.005537 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:45:47.005555 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:45:47.005573 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:45:47.005590 | orchestrator |
2026-02-09 05:45:47.005607 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-09 05:45:47.005624 | orchestrator | Monday 09 February 2026 05:45:44 +0000 (0:00:01.064) 0:01:11.566 *******
2026-02-09 05:45:47.005642 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:47.005660 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:45:47.005678 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:45:47.005760 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:45:47.005784 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:45:47.005803 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:45:47.005821 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:45:47.005838 | orchestrator |
2026-02-09 05:45:47.005856 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-09 05:45:47.005877 | orchestrator | Monday 09 February 2026 05:45:45 +0000 (0:00:00.773) 0:01:12.340 *******
2026-02-09 05:45:47.005896 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:47.005915 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:45:47.005934 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:45:47.005952 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:45:47.005972 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:45:47.005990 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:45:47.006010 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:45:47.006180 | orchestrator |
2026-02-09 05:45:47.006203 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-09 05:45:47.006221 | orchestrator | Monday 09 February 2026 05:45:46 +0000 (0:00:01.020) 0:01:13.360 *******
2026-02-09 05:45:47.006245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.006268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.006288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.006337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-54-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-09 05:45:47.006418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.006443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.006464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.006489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e53c6ccf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-09 05:45:47.006528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.162984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-09 05:45:47.163221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '05884397', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-09 05:45:47.163325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163354 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:45:47.163426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.163489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-09 05:45:47.345486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.345588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.345603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.345616 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:45:47.345633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '669d190d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-09 05:45:47.345697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.345729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.345748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.345761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d', 'dm-uuid-LVM-5Oms0YhgvCVrWp80wJ4aA96yxcElodY708xUFI15dbkcdnHIR6L7mBfIOccNLzlf'], 'uuids': ['92075616-c4e2-4925-8a52-781f81959675'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04e8f271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf']}})
2026-02-09 05:45:47.345774 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa', 'scsi-SQEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '96ef4066', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-09 05:45:47.345787 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UekHwl-BrrL-tQwo-R3UW-N6L4-qGv4-ixmNDb', 'scsi-0QEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d', 'scsi-SQEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6e78f5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b']}})
2026-02-09 05:45:47.345799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.345818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.345837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-09 05:45:47.582734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.582807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ', 'dm-uuid-CRYPT-LUKS2-de5e11f498514777b9f5c3124a9d07d1-0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-09 05:45:47.582816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.582822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b', 'dm-uuid-LVM-0WjeRAA0lqf3cpEn6bug4xs5UGMazLjB0h01y39wS0A1Owicu3DkC9MW8cY3xQUQ'], 'uuids': ['de5e11f4-9851-4777-b9f5-c3124a9d07d1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6e78f5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ']}})
2026-02-09 05:45:47.582829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-DXOpal-X33W-ipPf-IHHU-xTym-5svh-1uUmz7', 'scsi-0QEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4', 'scsi-SQEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04e8f271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d']}})
2026-02-09 05:45:47.582849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.582854 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:45:47.582879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62fae712', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-09 05:45:47.582886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.582892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.582898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf', 'dm-uuid-CRYPT-LUKS2-92075616c4e249258a52781f81959675-08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-09 05:45:47.582911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.582920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9', 'dm-uuid-LVM-3CHn6ZP2pM8HpEDxSzeilwVQRF6lfj6OM8VSybDQwMAeXi61wvDItRKk6IUvThlx'], 'uuids': ['50c6b72f-c737-47f8-b44b-c4ff80acfe27'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca63f30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['M8VSyb-DQwM-AeXi-61wv-DItR-Kk6I-UvThlx']}})
2026-02-09 05:45:47.582938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24', 'scsi-SQEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'accd83ee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-09 05:45:47.696626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GwhUsL-bhJV-LTOj-ZPeb-I83T-YRPV-54WlOk', 'scsi-0QEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509', 'scsi-SQEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '31e706da', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3']}})
2026-02-09 05:45:47.696727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.696745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-09 05:45:47.696759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-49-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 05:45:47.696796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:47.696808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qNBTok-CpiF-fT2O-DyZT-Z76G-se0H-WZzEjm', 'dm-uuid-CRYPT-LUKS2-6090f72cb53d48828358e477240bcd4c-qNBTok-CpiF-fT2O-DyZT-Z76G-se0H-WZzEjm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-09 05:45:47.696821 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:45:47.696834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 
05:45:47.696878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3', 'dm-uuid-LVM-28EU5fYWgLFVVTr1j10NPpT02LXZ3m2dqNBTokCpiFfT2ODyZTZ76Gse0HWZzEjm'], 'uuids': ['6090f72c-b53d-4882-8358-e477240bcd4c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '31e706da', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qNBTok-CpiF-fT2O-DyZT-Z76G-se0H-WZzEjm']}})  2026-02-09 05:45:47.696892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-TEtRPa-KFlO-eA6E-SkhX-jKKT-2BmX-PRBRTw', 'scsi-0QEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c', 'scsi-SQEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca63f30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9']}})  2026-02-09 05:45:47.696904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:47.696918 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9ffd840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 05:45:47.696951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:47.835190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:47.835295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-M8VSyb-DQwM-AeXi-61wv-DItR-Kk6I-UvThlx', 'dm-uuid-CRYPT-LUKS2-50c6b72fc73747f8b44bc4ff80acfe27-M8VSyb-DQwM-AeXi-61wv-DItR-Kk6I-UvThlx'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-09 05:45:47.835319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:47.835336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6', 'dm-uuid-LVM-UtcmtJOb91d0iC1jVKeu7Rh960XYKnyIcb9DX8DrOUkJ6Npc5MMds8BTnO00gFXN'], 'uuids': ['edbf4323-e023-483f-8845-3d4d18b95c7e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1815f4db', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN']}})  2026-02-09 05:45:47.835438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0', 'scsi-SQEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b185251', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 05:45:47.835453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jvH3Zw-djyF-WIKe-T88H-f7IR-FEUt-vCkV4E', 'scsi-0QEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717', 'scsi-SQEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad4d2000', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92']}})  2026-02-09 05:45:47.835482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:47.835509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:47.835519 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 05:45:47.835529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:47.835554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p97ku3-6NfS-W31v-os0G-j86u-7Mmr-IxN6I0', 'dm-uuid-CRYPT-LUKS2-82e0402e2657452988ce543ce32f645b-p97ku3-6NfS-W31v-os0G-j86u-7Mmr-IxN6I0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-09 05:45:47.835571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:47.835586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92', 'dm-uuid-LVM-SZPyknUsbhfLaF3x5K31ctP0vcigu1Pwp97ku36NfSW31vos0Gj86u7MmrIxN6I0'], 'uuids': ['82e0402e-2657-4529-88ce-543ce32f645b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad4d2000', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p97ku3-6NfS-W31v-os0G-j86u-7Mmr-IxN6I0']}})  2026-02-09 05:45:47.835607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-nj2fwl-jxqG-fYtS-q2di-jVVW-fVes-RibCJ0', 'scsi-0QEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46', 'scsi-SQEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1815f4db', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6']}})  2026-02-09 05:45:47.835632 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.157973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f810d870', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 05:45:48.158155 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.158176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.158203 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN', 'dm-uuid-CRYPT-LUKS2-edbf4323e023483f88453d4d18b95c7e-cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-09 05:45:48.158218 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:45:48.158232 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:45:48.158244 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.158274 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.158287 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.158306 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': 
['2026-02-09-02-25-19-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 05:45:48.158318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.158330 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.158341 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.158419 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '07b5cadf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-09 05:45:48.575660 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.575769 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:45:48.575786 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:45:48.575799 | orchestrator | 2026-02-09 05:45:48.575812 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-09 05:45:48.575824 | orchestrator | Monday 09 February 2026 05:45:48 +0000 (0:00:01.430) 0:01:14.791 ******* 2026-02-09 05:45:48.575837 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.575851 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.575881 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.575894 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-54-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.575958 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.575980 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.575999 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.576031 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e53c6ccf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.576079 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823346 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823501 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823517 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823528 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823557 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823590 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823620 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823631 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823651 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '05884397', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15', 
'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823679 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:45:48.823691 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:48.823710 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011167 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011268 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011283 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011313 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011346 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011410 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011448 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011472 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '669d190d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011494 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011506 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.011518 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:45:49.011540 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.128173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d', 'dm-uuid-LVM-5Oms0YhgvCVrWp80wJ4aA96yxcElodY708xUFI15dbkcdnHIR6L7mBfIOccNLzlf'], 'uuids': ['92075616-c4e2-4925-8a52-781f81959675'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04e8f271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf']}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.128293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa', 'scsi-SQEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '96ef4066', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.128333 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UekHwl-BrrL-tQwo-R3UW-N6L4-qGv4-ixmNDb', 'scsi-0QEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d', 'scsi-SQEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6e78f5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b']}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.128352 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.128499 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:45:49.128519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.128553 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.128566 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:45:49.128597 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ', 'dm-uuid-CRYPT-LUKS2-de5e11f498514777b9f5c3124a9d07d1-0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
[2026-02-09 05:45:49.128609 - 05:45:49.487347 | repetitive per-device skip output condensed; the full ansible_devices facts printed for each skipped item (QEMU HARDDISK/DVD-ROM, LVM/ceph OSD dm devices, loop devices) are elided]
orchestrator | skipping: [testbed-node-3] => items loop0, loop3, loop5, loop7, dm-0, dm-3, sda, sdc - Conditional result was False: 'osd_auto_discovery | default(False) | bool'
2026-02-09 05:45:49.304738 | orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => items loop0-loop7, dm-0, dm-1, dm-2, dm-3, sda, sdb, sdc, sdd, sr0 - Conditional result was False: 'osd_auto_discovery | default(False) | bool'
2026-02-09 05:45:49.487049 | orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => items loop0-loop7, dm-0, dm-1, dm-2, sda, sdb, sdc, sdd, sr0 - Conditional result was False: 'osd_auto_discovery | default(False) | bool'
2026-02-09 05:45:49.487327 | orchestrator | skipping: [testbed-manager] => (item=loop1) - Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])'
2026-02-09 05:45:49.487439 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:49.487482 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:53.136254 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN', 'dm-uuid-CRYPT-LUKS2-edbf4323e023483f88453d4d18b95c7e-cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:53.136425 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-25-19-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:53.136448 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:45:53.136463 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:53.136499 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:53.136511 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:53.136565 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '07b5cadf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:53.136580 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:53.136600 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-09 05:45:53.136611 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:45:53.136622 | orchestrator |
2026-02-09 05:45:53.136634 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-09 05:45:53.136647 | orchestrator | Monday 09 February 2026 05:45:49 +0000 (0:00:01.481) 0:01:16.272 *******
2026-02-09 05:45:53.136657 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:45:53.136668 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:45:53.136679 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:45:53.136689 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:45:53.136699 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:45:53.136709 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:45:53.136720 | orchestrator | ok: [testbed-manager]
2026-02-09 05:45:53.136730 | orchestrator |
2026-02-09 05:45:53.136772 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-09 05:45:53.136786 | orchestrator | Monday 09 February 2026 05:45:51 +0000 (0:00:01.519) 0:01:17.791 *******
2026-02-09 05:45:53.136799 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:45:53.136812 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:45:53.136825 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:45:53.136837 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:45:53.136864 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:45:53.136877 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:45:53.136890 | orchestrator | ok: [testbed-manager]
2026-02-09 05:45:53.136901 | orchestrator |
2026-02-09 05:45:53.136912 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-09 05:45:53.136923 | orchestrator | Monday 09 February 2026 05:45:51 +0000 (0:00:00.736) 0:01:18.528 *******
2026-02-09 05:45:53.136933 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:45:53.136944 | orchestrator | ok: [testbed-node-1]
2026-02-09 05:45:53.136954 | orchestrator | ok: [testbed-node-2]
2026-02-09 05:45:53.136965 | orchestrator | ok: [testbed-node-3]
2026-02-09 05:45:53.136975 | orchestrator | ok: [testbed-node-4]
2026-02-09 05:45:53.136986 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:45:53.137004 | orchestrator | ok: [testbed-node-5]
2026-02-09 05:46:05.652047 | orchestrator |
2026-02-09 05:46:05.652173 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-09 05:46:05.652192 | orchestrator | Monday 09 February 2026 05:45:53 +0000 (0:00:01.240) 0:01:19.768 *******
2026-02-09 05:46:05.652203 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:05.652215 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:05.652225 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:05.652235 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:05.652245 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:05.652255 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:05.652266 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:05.652276 | orchestrator |
2026-02-09 05:46:05.652287 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-09 05:46:05.652331 | orchestrator | Monday 09 February 2026 05:45:53 +0000 (0:00:00.718) 0:01:20.487 *******
2026-02-09 05:46:05.652343 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:05.652350 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:05.652358 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:05.652364 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:05.652426 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:05.652436 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:05.652442 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-02-09 05:46:05.652449 | orchestrator |
2026-02-09 05:46:05.652455 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-09 05:46:05.652462 | orchestrator | Monday 09 February 2026 05:45:55 +0000 (0:00:01.583) 0:01:22.070 *******
2026-02-09 05:46:05.652468 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:05.652474 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:05.652480 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:05.652486 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:05.652493 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:05.652499 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:05.652505 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:05.652511 | orchestrator |
2026-02-09 05:46:05.652517 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-09 05:46:05.652524 | orchestrator | Monday 09 February 2026 05:45:56 +0000 (0:00:00.794) 0:01:22.864 *******
2026-02-09 05:46:05.652530 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:46:05.652537 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-09 05:46:05.652543 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 05:46:05.652550 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-09 05:46:05.652556 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-09 05:46:05.652562 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-09 05:46:05.652568 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-09 05:46:05.652574 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-09 05:46:05.652582 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 05:46:05.652590 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-09 05:46:05.652597 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-09 05:46:05.652604 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-09 05:46:05.652611 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-09 05:46:05.652618 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-09 05:46:05.652626 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-09 05:46:05.652633 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-09 05:46:05.652640 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-09 05:46:05.652648 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-09 05:46:05.652655 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-09 05:46:05.652662 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-09 05:46:05.652669 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-09 05:46:05.652677 | orchestrator |
2026-02-09 05:46:05.652684 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-09 05:46:05.652691 | orchestrator | Monday 09 February 2026 05:45:58 +0000 (0:00:01.857) 0:01:24.722 *******
2026-02-09 05:46:05.652699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:46:05.652707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 05:46:05.652714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 05:46:05.652722 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:05.652730 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-09 05:46:05.652737 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-09 05:46:05.652750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-09 05:46:05.652756 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:05.652762 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-09 05:46:05.652769 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-09 05:46:05.652775 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-09 05:46:05.652781 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:05.652787 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-09 05:46:05.652809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-09 05:46:05.652815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-09 05:46:05.652821 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:05.652828 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-09 05:46:05.652834 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-09 05:46:05.652840 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-09 05:46:05.652846 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:05.652852 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-09 05:46:05.652875 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-09 05:46:05.652881 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-09 05:46:05.652887 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:05.652894 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-09 05:46:05.652900 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-09 05:46:05.652906 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-09 05:46:05.652912 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:46:05.652918 | orchestrator | 2026-02-09 05:46:05.652924 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-09 05:46:05.652931 | orchestrator | Monday 09 February 2026 05:45:59 +0000 (0:00:01.071) 0:01:25.794 ******* 2026-02-09 05:46:05.652937 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:46:05.652943 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:46:05.652949 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:46:05.652955 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:46:05.652962 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 05:46:05.652969 | orchestrator | 2026-02-09 05:46:05.652975 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-09 05:46:05.652983 | orchestrator | Monday 09 February 2026 05:46:00 +0000 (0:00:00.987) 0:01:26.782 ******* 2026-02-09 05:46:05.652989 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:05.652995 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:05.653001 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:05.653007 | orchestrator | 2026-02-09 05:46:05.653014 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-09 05:46:05.653020 | orchestrator | Monday 09 February 2026 05:46:00 +0000 (0:00:00.587) 0:01:27.369 ******* 2026-02-09 05:46:05.653026 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:05.653032 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:05.653038 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:05.653044 | orchestrator | 2026-02-09 05:46:05.653050 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-09 05:46:05.653057 | orchestrator | Monday 09 February 2026 05:46:01 +0000 (0:00:00.377) 0:01:27.746 ******* 2026-02-09 05:46:05.653063 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:05.653069 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:05.653075 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:05.653081 | orchestrator | 2026-02-09 05:46:05.653087 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-09 05:46:05.653100 | orchestrator | Monday 09 February 2026 05:46:01 +0000 (0:00:00.345) 0:01:28.092 ******* 2026-02-09 05:46:05.653106 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:46:05.653113 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:46:05.653119 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:46:05.653125 | orchestrator | 2026-02-09 05:46:05.653131 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-02-09 05:46:05.653137 | orchestrator | Monday 09 February 2026 05:46:01 +0000 (0:00:00.411) 0:01:28.504 ******* 2026-02-09 05:46:05.653143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 05:46:05.653149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 05:46:05.653156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 05:46:05.653162 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:05.653168 | orchestrator | 2026-02-09 05:46:05.653174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-09 05:46:05.653180 | orchestrator | Monday 09 February 2026 05:46:02 +0000 (0:00:00.427) 0:01:28.931 ******* 2026-02-09 05:46:05.653186 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 05:46:05.653192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 05:46:05.653198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 05:46:05.653205 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:05.653211 | orchestrator | 2026-02-09 05:46:05.653217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-09 05:46:05.653223 | orchestrator | Monday 09 February 2026 05:46:02 +0000 (0:00:00.703) 0:01:29.635 ******* 2026-02-09 05:46:05.653229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 05:46:05.653235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 05:46:05.653241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 05:46:05.653247 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:05.653254 | orchestrator | 2026-02-09 05:46:05.653260 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-09 
05:46:05.653266 | orchestrator | Monday 09 February 2026 05:46:03 +0000 (0:00:00.671) 0:01:30.307 ******* 2026-02-09 05:46:05.653272 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:46:05.653278 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:46:05.653284 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:46:05.653290 | orchestrator | 2026-02-09 05:46:05.653297 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-09 05:46:05.653303 | orchestrator | Monday 09 February 2026 05:46:04 +0000 (0:00:00.605) 0:01:30.913 ******* 2026-02-09 05:46:05.653309 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-09 05:46:05.653319 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-09 05:46:05.653325 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-09 05:46:05.653331 | orchestrator | 2026-02-09 05:46:05.653337 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-09 05:46:05.653344 | orchestrator | Monday 09 February 2026 05:46:04 +0000 (0:00:00.545) 0:01:31.459 ******* 2026-02-09 05:46:05.653350 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 05:46:05.653356 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 05:46:05.653363 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 05:46:05.653393 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-09 05:46:32.382948 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-09 05:46:32.383089 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-09 05:46:32.383112 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-09 05:46:32.383161 | orchestrator | 2026-02-09 
05:46:32.383180 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-09 05:46:32.383198 | orchestrator | Monday 09 February 2026 05:46:05 +0000 (0:00:00.819) 0:01:32.279 ******* 2026-02-09 05:46:32.383214 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 05:46:32.383231 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 05:46:32.383247 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 05:46:32.383262 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-09 05:46:32.383277 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-09 05:46:32.383293 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-09 05:46:32.383309 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-09 05:46:32.383326 | orchestrator | 2026-02-09 05:46:32.383343 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-02-09 05:46:32.383360 | orchestrator | Monday 09 February 2026 05:46:07 +0000 (0:00:02.273) 0:01:34.552 ******* 2026-02-09 05:46:32.383378 | orchestrator | changed: [testbed-node-3] 2026-02-09 05:46:32.383441 | orchestrator | changed: [testbed-node-4] 2026-02-09 05:46:32.383461 | orchestrator | changed: [testbed-node-5] 2026-02-09 05:46:32.383480 | orchestrator | changed: [testbed-manager] 2026-02-09 05:46:32.383499 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:46:32.383520 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:46:32.383539 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:46:32.383559 | orchestrator | 2026-02-09 05:46:32.383575 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
*********************** 2026-02-09 05:46:32.383592 | orchestrator | Monday 09 February 2026 05:46:15 +0000 (0:00:07.290) 0:01:41.842 ******* 2026-02-09 05:46:32.383610 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:46:32.383630 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:46:32.383648 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:46:32.383667 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:32.383687 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:32.383746 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:32.383768 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:46:32.383786 | orchestrator | 2026-02-09 05:46:32.383807 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-02-09 05:46:32.383825 | orchestrator | Monday 09 February 2026 05:46:16 +0000 (0:00:01.056) 0:01:42.899 ******* 2026-02-09 05:46:32.383841 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:46:32.383856 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:46:32.383873 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:46:32.383888 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:32.383903 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:32.383919 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:32.383934 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:46:32.383949 | orchestrator | 2026-02-09 05:46:32.383966 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-02-09 05:46:32.383982 | orchestrator | Monday 09 February 2026 05:46:16 +0000 (0:00:00.731) 0:01:43.631 ******* 2026-02-09 05:46:32.383997 | orchestrator | changed: [testbed-node-1] 2026-02-09 05:46:32.384013 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:46:32.384030 | orchestrator | changed: [testbed-node-2] 2026-02-09 05:46:32.384045 | orchestrator | changed: [testbed-node-0] 
2026-02-09 05:46:32.384060 | orchestrator | changed: [testbed-node-3]
2026-02-09 05:46:32.384076 | orchestrator | changed: [testbed-node-4]
2026-02-09 05:46:32.384092 | orchestrator | changed: [testbed-node-5]
2026-02-09 05:46:32.384108 | orchestrator |
2026-02-09 05:46:32.384126 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-09 05:46:32.384163 | orchestrator | Monday 09 February 2026 05:46:19 +0000 (0:00:02.442) 0:01:46.074 *******
2026-02-09 05:46:32.384180 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-09 05:46:32.384196 | orchestrator |
2026-02-09 05:46:32.384210 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-09 05:46:32.384224 | orchestrator | Monday 09 February 2026 05:46:21 +0000 (0:00:01.926) 0:01:48.001 *******
2026-02-09 05:46:32.384241 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.384257 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.384273 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.384289 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.384305 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.384320 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.384337 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.384354 | orchestrator |
2026-02-09 05:46:32.384371 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-09 05:46:32.384427 | orchestrator | Monday 09 February 2026 05:46:22 +0000 (0:00:01.027) 0:01:49.029 *******
2026-02-09 05:46:32.384456 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.384466 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.384476 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.384486 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.384496 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.384506 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.384516 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.384526 | orchestrator |
2026-02-09 05:46:32.384537 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-09 05:46:32.384572 | orchestrator | Monday 09 February 2026 05:46:23 +0000 (0:00:01.059) 0:01:50.088 *******
2026-02-09 05:46:32.384583 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.384593 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.384603 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.384613 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.384623 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.384634 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.384643 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.384654 | orchestrator |
2026-02-09 05:46:32.384664 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-09 05:46:32.384674 | orchestrator | Monday 09 February 2026 05:46:24 +0000 (0:00:00.764) 0:01:50.853 *******
2026-02-09 05:46:32.384684 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.384694 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.384704 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.384715 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.384725 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.384735 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.384745 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.384813 | orchestrator |
2026-02-09 05:46:32.384825 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-02-09 05:46:32.384835 | orchestrator | Monday 09 February 2026 05:46:25 +0000 (0:00:01.108) 0:01:51.961 *******
2026-02-09 05:46:32.384846 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.384856 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.384866 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.384876 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.384886 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.384897 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.384907 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.384917 | orchestrator |
2026-02-09 05:46:32.384928 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-09 05:46:32.384938 | orchestrator | Monday 09 February 2026 05:46:26 +0000 (0:00:00.778) 0:01:52.740 *******
2026-02-09 05:46:32.384960 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.384971 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.384981 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.384991 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.385002 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.385012 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.385022 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.385032 | orchestrator |
2026-02-09 05:46:32.385043 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-02-09 05:46:32.385054 | orchestrator | Monday 09 February 2026 05:46:27 +0000 (0:00:00.996) 0:01:53.737 *******
2026-02-09 05:46:32.385064 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.385074 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.385084 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.385094 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.385105 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.385115 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.385125 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.385135 | orchestrator |
2026-02-09 05:46:32.385145 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-02-09 05:46:32.385156 | orchestrator | Monday 09 February 2026 05:46:27 +0000 (0:00:00.773) 0:01:54.510 *******
2026-02-09 05:46:32.385166 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.385176 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.385186 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.385197 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.385207 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.385217 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.385227 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.385237 | orchestrator |
2026-02-09 05:46:32.385248 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-02-09 05:46:32.385258 | orchestrator | Monday 09 February 2026 05:46:28 +0000 (0:00:01.063) 0:01:55.574 *******
2026-02-09 05:46:32.385268 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.385279 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.385289 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.385299 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.385309 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.385320 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.385330 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.385340 | orchestrator |
2026-02-09 05:46:32.385533 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-02-09 05:46:32.385572 | orchestrator | Monday 09 February 2026 05:46:29 +0000 (0:00:01.006) 0:01:56.580 *******
2026-02-09 05:46:32.385582 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.385592 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.385601 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.385611 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.385621 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.385630 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.385640 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.385650 | orchestrator |
2026-02-09 05:46:32.385660 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-02-09 05:46:32.385669 | orchestrator | Monday 09 February 2026 05:46:30 +0000 (0:00:00.793) 0:01:57.373 *******
2026-02-09 05:46:32.385679 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.385694 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.385704 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.385714 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.385724 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:32.385733 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:32.385754 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:32.385764 | orchestrator |
2026-02-09 05:46:32.385773 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-02-09 05:46:32.385783 | orchestrator | Monday 09 February 2026 05:46:31 +0000 (0:00:00.989) 0:01:58.363 *******
2026-02-09 05:46:32.385793 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:32.385802 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:32.385812 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:32.385821 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:32.385845 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:41.855957 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:41.856070 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:41.856086 | orchestrator |
2026-02-09 05:46:41.856099 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-02-09 05:46:41.856111 | orchestrator | Monday 09 February 2026 05:46:32 +0000 (0:00:00.773) 0:01:59.137 *******
2026-02-09 05:46:41.856122 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:41.856133 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:41.856203 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:41.856217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 05:46:41.856230 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 05:46:41.856241 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:41.856252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 05:46:41.856264 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 05:46:41.856274 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:41.856285 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 05:46:41.856296 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 05:46:41.856307 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:41.856318 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:41.856329 | orchestrator |
2026-02-09 05:46:41.856340 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-02-09 05:46:41.856352 | orchestrator | Monday 09 February 2026 05:46:33 +0000 (0:00:01.031) 0:02:00.168 *******
2026-02-09 05:46:41.856363 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:41.856373 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:41.856384 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:41.856460 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:41.856476 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:41.856487 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:41.856500 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:41.856513 | orchestrator |
2026-02-09 05:46:41.856525 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-02-09 05:46:41.856539 | orchestrator | Monday 09 February 2026 05:46:34 +0000 (0:00:00.765) 0:02:00.933 *******
2026-02-09 05:46:41.856551 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:41.856563 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:41.856575 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:41.856588 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:41.856601 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:41.856612 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:41.856622 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:41.856661 | orchestrator |
2026-02-09 05:46:41.856673 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-02-09 05:46:41.856684 | orchestrator | Monday 09 February 2026 05:46:35 +0000 (0:00:01.047) 0:02:01.980 *******
2026-02-09 05:46:41.856694 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:41.856705 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:41.856716 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:41.856726 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:41.856737 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:41.856747 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:41.856758 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:41.856769 | orchestrator |
2026-02-09 05:46:41.856780 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-02-09 05:46:41.856791 | orchestrator | Monday 09 February 2026 05:46:36 +0000 (0:00:00.760) 0:02:02.741 *******
2026-02-09 05:46:41.856801 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:41.856812 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:41.856823 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:41.856833 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:41.856844 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:41.856855 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:41.856865 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:41.856876 | orchestrator |
2026-02-09 05:46:41.856887 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-02-09 05:46:41.856898 | orchestrator | Monday 09 February 2026 05:46:37 +0000 (0:00:01.054) 0:02:03.795 *******
2026-02-09 05:46:41.856909 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:41.856920 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:41.856930 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:41.856956 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:41.856967 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:41.856978 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:41.856988 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:46:41.856999 | orchestrator | 2026-02-09 05:46:41.857010 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-09 05:46:41.857021 | orchestrator | Monday 09 February 2026 05:46:38 +0000 (0:00:01.002) 0:02:04.798 ******* 2026-02-09 05:46:41.857032 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:46:41.857042 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:46:41.857053 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:46:41.857064 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:41.857075 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:41.857085 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:41.857096 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:46:41.857107 | orchestrator | 2026-02-09 05:46:41.857137 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-09 05:46:41.857149 | orchestrator | Monday 09 February 2026 05:46:38 +0000 (0:00:00.721) 0:02:05.520 ******* 2026-02-09 05:46:41.857160 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:46:41.857171 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:46:41.857182 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:46:41.857192 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:46:41.857203 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 05:46:41.857214 | orchestrator | 2026-02-09 05:46:41.857225 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-09 05:46:41.857236 | orchestrator | Monday 09 February 2026 05:46:40 +0000 (0:00:01.566) 0:02:07.087 ******* 2026-02-09 05:46:41.857247 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:46:41.857257 | orchestrator | ok: 
[testbed-node-4] 2026-02-09 05:46:41.857268 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:46:41.857279 | orchestrator | 2026-02-09 05:46:41.857289 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-09 05:46:41.857310 | orchestrator | Monday 09 February 2026 05:46:40 +0000 (0:00:00.374) 0:02:07.461 ******* 2026-02-09 05:46:41.857321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})  2026-02-09 05:46:41.857332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})  2026-02-09 05:46:41.857343 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:41.857354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})  2026-02-09 05:46:41.857365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 05:46:41.857376 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:41.857386 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 05:46:41.857421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 05:46:41.857433 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:41.857444 | orchestrator | 2026-02-09 05:46:41.857455 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-09 05:46:41.857466 | orchestrator | Monday 09 February 2026 
05:46:41 +0000 (0:00:00.421) 0:02:07.882 ******* 2026-02-09 05:46:41.857479 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}, 'ansible_loop_var': 'item'})  2026-02-09 05:46:41.857492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}, 'ansible_loop_var': 'item'})  2026-02-09 05:46:41.857503 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:41.857515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}, 'ansible_loop_var': 'item'})  2026-02-09 05:46:41.857526 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}, 'ansible_loop_var': 'item'})  2026-02-09 05:46:41.857536 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:41.857553 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}, 
'ansible_loop_var': 'item'})  2026-02-09 05:46:41.857573 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}, 'ansible_loop_var': 'item'})  2026-02-09 05:46:44.904269 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:44.904467 | orchestrator | 2026-02-09 05:46:44.904491 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-09 05:46:44.904505 | orchestrator | Monday 09 February 2026 05:46:41 +0000 (0:00:00.608) 0:02:08.490 ******* 2026-02-09 05:46:44.904517 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:44.904529 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:44.904540 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:44.904551 | orchestrator | 2026-02-09 05:46:44.904562 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-09 05:46:44.904573 | orchestrator | Monday 09 February 2026 05:46:42 +0000 (0:00:00.338) 0:02:08.829 ******* 2026-02-09 05:46:44.904584 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:44.904594 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:44.904605 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:44.904616 | orchestrator | 2026-02-09 05:46:44.904626 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-09 05:46:44.904637 | orchestrator | Monday 09 February 2026 05:46:42 +0000 (0:00:00.332) 0:02:09.161 ******* 2026-02-09 05:46:44.904648 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:44.904659 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:44.904669 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:44.904680 | 
orchestrator | 2026-02-09 05:46:44.904691 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-09 05:46:44.904701 | orchestrator | Monday 09 February 2026 05:46:42 +0000 (0:00:00.292) 0:02:09.454 ******* 2026-02-09 05:46:44.904712 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:44.904723 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:44.904733 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:46:44.904744 | orchestrator | 2026-02-09 05:46:44.904755 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-09 05:46:44.904765 | orchestrator | Monday 09 February 2026 05:46:43 +0000 (0:00:00.311) 0:02:09.766 ******* 2026-02-09 05:46:44.904776 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}) 2026-02-09 05:46:44.904789 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}) 2026-02-09 05:46:44.904800 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}) 2026-02-09 05:46:44.904813 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}) 2026-02-09 05:46:44.904826 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}) 2026-02-09 05:46:44.904839 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}) 2026-02-09 05:46:44.904851 | orchestrator | 2026-02-09 05:46:44.904864 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-09 05:46:44.904878 | orchestrator | Monday 09 February 2026 05:46:44 +0000 (0:00:01.335) 0:02:11.101 ******* 2026-02-09 05:46:44.904913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-709cc28b-6adb-555a-83e9-344e81441f7b/osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1770608567.5506058, 'mtime': 1770608567.5466056, 'ctime': 1770608567.5466056, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-709cc28b-6adb-555a-83e9-344e81441f7b/osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}, 'ansible_loop_var': 'item'})  2026-02-09 05:46:44.904961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-244f969e-c6c5-5568-af21-d52fe589178d/osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 
1770608586.4218886, 'mtime': 1770608586.4168885, 'ctime': 1770608586.4168885, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-244f969e-c6c5-5568-af21-d52fe589178d/osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}, 'ansible_loop_var': 'item'})  2026-02-09 05:46:44.904978 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:46:44.904993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3/osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1770608570.2717092, 'mtime': 1770608570.2657092, 'ctime': 1770608570.2657092, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3/osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}, 'ansible_loop_var': 'item'})  2026-02-09 05:46:44.905014 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9/osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1770608588.2039764, 'mtime': 1770608588.1989765, 'ctime': 1770608588.1989765, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9/osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}, 'ansible_loop_var': 'item'})  2026-02-09 05:46:44.905035 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:46:44.905060 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-46be6a4f-1579-5910-a72e-9190b5238c92/osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1770608567.6176636, 'mtime': 1770608567.6126635, 'ctime': 1770608567.6126635, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-46be6a4f-1579-5910-a72e-9190b5238c92/osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.746001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-fca1079b-480c-5ada-8652-888828a580b6/osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1770608587.8749707, 'mtime': 1770608587.8699706, 'ctime': 1770608587.8699706, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-fca1079b-480c-5ada-8652-888828a580b6/osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.746202 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:46.746235 | orchestrator |
2026-02-09 05:46:46.746249 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-02-09 05:46:46.746263 | orchestrator | Monday 09 February 2026 05:46:44 +0000 (0:00:00.436) 0:02:11.537 *******
2026-02-09 05:46:46.746275 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 05:46:46.746288 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 05:46:46.746327 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:46.746339 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 05:46:46.746349 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 05:46:46.746360 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:46.746371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 05:46:46.746462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 05:46:46.746478 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:46.746493 | orchestrator |
2026-02-09 05:46:46.746513 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-02-09 05:46:46.746535 | orchestrator | Monday 09 February 2026 05:46:45 +0000 (0:00:00.437) 0:02:11.975 *******
2026-02-09 05:46:46.746556 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.746580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.746601 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:46.746622 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.746670 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item':
{'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.746692 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:46.746713 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.746735 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.746754 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:46.746772 | orchestrator |
2026-02-09 05:46:46.746792 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-02-09 05:46:46.746812 | orchestrator | Monday 09 February 2026 05:46:45 +0000 (0:00:00.421) 0:02:12.396 *******
2026-02-09 05:46:46.746832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 05:46:46.746869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 05:46:46.746890 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:46.746910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 05:46:46.746929 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 05:46:46.746949 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:46.746961 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 05:46:46.746972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 05:46:46.746983 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:46.746994 | orchestrator |
2026-02-09 05:46:46.747004 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-02-09 05:46:46.747015 | orchestrator | Monday 09 February 2026 05:46:46 +0000 (0:00:00.601) 0:02:12.997 *******
2026-02-09 05:46:46.747026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.747046 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.747057 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:46.747068 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.747079 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.747090 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:46.747101 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:46.747123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}, 'ansible_loop_var': 'item'})
2026-02-09 05:46:51.103641 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:51.103773 | orchestrator |
2026-02-09 05:46:51.103790 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-02-09 05:46:51.103804 | orchestrator | Monday 09 February 2026 05:46:46 +0000 (0:00:00.381) 0:02:13.379 *******
2026-02-09 05:46:51.103815 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:51.103855 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:51.103867 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:51.103878 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:51.103889 | orchestrator | skipping:
[testbed-node-4]
2026-02-09 05:46:51.103899 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:51.103910 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:51.103921 | orchestrator |
2026-02-09 05:46:51.103932 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-02-09 05:46:51.103943 | orchestrator | Monday 09 February 2026 05:46:47 +0000 (0:00:00.774) 0:02:14.154 *******
2026-02-09 05:46:51.103954 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:51.103965 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:51.103975 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:51.103986 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:51.103997 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 05:46:51.104009 | orchestrator |
2026-02-09 05:46:51.104019 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-02-09 05:46:51.104030 | orchestrator | Monday 09 February 2026 05:46:49 +0000 (0:00:01.571) 0:02:15.725 *******
2026-02-09 05:46:51.104042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104055 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104161 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104173 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:51.104183 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:51.104194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104248 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:51.104259 | orchestrator |
2026-02-09 05:46:51.104270 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ********************
2026-02-09 05:46:51.104289 | orchestrator | Monday 09 February 2026 05:46:49 +0000 (0:00:00.436) 0:02:16.162 *******
2026-02-09 05:46:51.104300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104375 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:51.104385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3,
'type': 'replicated'}})
2026-02-09 05:46:51.104466 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:51.104477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104531 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:51.104542 | orchestrator |
2026-02-09 05:46:51.104552 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ********************
2026-02-09 05:46:51.104564 | orchestrator | Monday 09 February 2026 05:46:50 +0000 (0:00:00.707) 0:02:16.870 *******
2026-02-09 05:46:51.104575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104629 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:51.104646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104664 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104707 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:51.104718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-09 05:46:51.104772 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:51.104783 | orchestrator |
2026-02-09 05:46:51.104794 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] ***********************************
2026-02-09 05:46:51.104805 | orchestrator | Monday 09 February 2026 05:46:50 +0000 (0:00:00.474) 0:02:17.344 *******
2026-02-09 05:46:51.104816 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:51.104826 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:51.104844 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:58.224761 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:58.224893 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:58.224916 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:58.224932 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:58.224948 | orchestrator |
2026-02-09 05:46:58.224965 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-02-09 05:46:58.224983 | orchestrator | Monday 09 February 2026 05:46:51 +0000 (0:00:00.754) 0:02:18.099 *******
2026-02-09 05:46:58.225000 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:58.225016 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:58.225026 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:58.225036 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:58.225045 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:58.225055 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:58.225064 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:58.225074 | orchestrator |
2026-02-09 05:46:58.225083 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-02-09 05:46:58.225093 | orchestrator | Monday 09 February 2026 05:46:52 +0000 (0:00:01.097) 0:02:19.197 *******
2026-02-09 05:46:58.225103 | orchestrator | skipping:
[testbed-node-0]
2026-02-09 05:46:58.225112 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:58.225121 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:58.225131 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:58.225140 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:58.225149 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:58.225159 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:58.225169 | orchestrator |
2026-02-09 05:46:58.225178 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-02-09 05:46:58.225189 | orchestrator | Monday 09 February 2026 05:46:53 +0000 (0:00:00.724) 0:02:19.921 *******
2026-02-09 05:46:58.225225 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:58.225235 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:58.225245 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:58.225255 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:58.225264 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:58.225301 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:58.225313 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:58.225324 | orchestrator |
2026-02-09 05:46:58.225336 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-02-09 05:46:58.225349 | orchestrator | Monday 09 February 2026 05:46:54 +0000 (0:00:01.023) 0:02:20.944 *******
2026-02-09 05:46:58.225360 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:58.225371 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:58.225387 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:58.225404 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:58.225443 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:58.225459 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:58.225475 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:58.225490 | orchestrator |
2026-02-09 05:46:58.225506 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-02-09 05:46:58.225521 | orchestrator | Monday 09 February 2026 05:46:55 +0000 (0:00:01.051) 0:02:21.996 *******
2026-02-09 05:46:58.225537 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:58.225553 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:58.225569 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:58.225584 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:58.225600 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:58.225615 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:58.225630 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:58.225646 | orchestrator |
2026-02-09 05:46:58.225679 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-02-09 05:46:58.225690 | orchestrator | Monday 09 February 2026 05:46:56 +0000 (0:00:00.802) 0:02:22.799 *******
2026-02-09 05:46:58.225700 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:58.225709 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:58.225720 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:58.225729 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:58.225739 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:58.225748 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:58.225757 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:58.225767 | orchestrator |
2026-02-09 05:46:58.225776 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-02-09 05:46:58.225786 | orchestrator | Monday 09 February 2026 05:46:57 +0000 (0:00:01.064) 0:02:23.863 *******
2026-02-09 05:46:58.225797 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:58.225809 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:58.225821 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 05:46:58.225833 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 05:46:58.225843 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 05:46:58.225855 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 05:46:58.225876 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:58.225905 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:58.225916 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:58.225925 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 05:46:58.225935 | orchestrator | skipping:
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 05:46:58.225945 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 05:46:58.225955 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 05:46:58.225964 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:58.225974 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:58.225984 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:58.225993 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 05:46:58.226003 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 05:46:58.226012 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 05:46:58.226078 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 05:46:58.226089 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:58.226104 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:58.226114 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:58.226124 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 05:46:58.226134 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:58.226144 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 05:46:58.226153 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:58.226171 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 05:46:58.226236 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 05:46:58.226249 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 05:46:58.226268 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 05:46:59.912360 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:59.912524 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:59.912552 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 05:46:59.912566 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:59.912578 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:59.912598 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:59.912611 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 05:46:59.912623 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 05:46:59.912634 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images,
profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 05:46:59.912645 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 05:46:59.912656 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:59.912666 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 05:46:59.912677 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 05:46:59.912688 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:59.912704 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 05:46:59.912739 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 05:46:59.912750 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 05:46:59.912781 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:59.912798 | orchestrator |
2026-02-09 05:46:59.912813 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-09 05:46:59.912826 | orchestrator | Monday 09 February 2026 05:46:58 +0000 (0:00:00.997) 0:02:24.861 *******
2026-02-09 05:46:59.912836 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:59.912847 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:59.912865 | orchestrator | skipping: [testbed-node-2]
2026-02-09 05:46:59.912878 | orchestrator | skipping: [testbed-node-3]
2026-02-09 05:46:59.912889 | orchestrator | skipping: [testbed-node-4]
2026-02-09 05:46:59.912901 | orchestrator | skipping: [testbed-node-5]
2026-02-09 05:46:59.912916 | orchestrator | skipping: [testbed-manager]
2026-02-09 05:46:59.912935 | orchestrator |
2026-02-09 05:46:59.912948 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-09 05:46:59.912962 | orchestrator | Monday 09 February 2026 05:46:59 +0000 (0:00:01.053) 0:02:25.914 *******
2026-02-09 05:46:59.912981 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:59.913000 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:59.913017 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 05:46:59.913058 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 05:46:59.913075 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 05:46:59.913093 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 05:46:59.913112 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:46:59.913131 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:59.913150 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:59.913167 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 05:46:59.913186 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 05:46:59.913205 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 05:46:59.913223 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 05:46:59.913238 | orchestrator | skipping: [testbed-node-1]
2026-02-09 05:46:59.913249 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 05:46:59.913260 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 05:46:59.913281 | orchestrator | skipping: [testbed-node-2] => (item={'caps':
{'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-09 05:46:59.913292 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-09 05:46:59.913310 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-09 05:46:59.913321 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-09 05:46:59.913331 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:46:59.913342 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-09 05:46:59.913353 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-09 05:46:59.913364 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-09 05:46:59.913374 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-09 05:46:59.913385 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-09 05:46:59.913395 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-09 05:46:59.913437 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-09 05:46:59.913471 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-09 05:47:15.419873 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-09 05:47:15.420058 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-09 05:47:15.420081 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-09 05:47:15.420095 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:47:15.420108 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-09 05:47:15.420121 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-09 05:47:15.420132 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-09 05:47:15.420172 | orchestrator | skipping: 
[testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-09 05:47:15.420183 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-09 05:47:15.420194 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-09 05:47:15.420205 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-09 05:47:15.420218 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-09 05:47:15.420229 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:47:15.420258 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-09 05:47:15.420269 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-09 05:47:15.420280 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-09 05:47:15.420290 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:47:15.420301 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile 
rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-09 05:47:15.420312 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-09 05:47:15.420322 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:47:15.420333 | orchestrator | 2026-02-09 05:47:15.420345 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-02-09 05:47:15.420357 | orchestrator | Monday 09 February 2026 05:47:00 +0000 (0:00:00.899) 0:02:26.814 ******* 2026-02-09 05:47:15.420368 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:15.420378 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:47:15.420389 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:47:15.420399 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:47:15.420409 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:47:15.420446 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:47:15.420457 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:47:15.420468 | orchestrator | 2026-02-09 05:47:15.420479 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-02-09 05:47:15.420489 | orchestrator | Monday 09 February 2026 05:47:01 +0000 (0:00:01.026) 0:02:27.841 ******* 2026-02-09 05:47:15.420500 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:15.420510 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:47:15.420520 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:47:15.420531 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:47:15.420542 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:47:15.420552 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:47:15.420563 | orchestrator | skipping: [testbed-manager] 2026-02-09 
05:47:15.420573 | orchestrator | 2026-02-09 05:47:15.420584 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-02-09 05:47:15.420616 | orchestrator | Monday 09 February 2026 05:47:01 +0000 (0:00:00.739) 0:02:28.581 ******* 2026-02-09 05:47:15.420637 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:15.420648 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:47:15.420658 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:47:15.420669 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:47:15.420679 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:47:15.420690 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:47:15.420701 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:47:15.420711 | orchestrator | 2026-02-09 05:47:15.420722 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-09 05:47:15.420732 | orchestrator | Monday 09 February 2026 05:47:03 +0000 (0:00:01.716) 0:02:30.298 ******* 2026-02-09 05:47:15.420743 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-09 05:47:15.420756 | orchestrator | 2026-02-09 05:47:15.420767 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-02-09 05:47:15.420778 | orchestrator | Monday 09 February 2026 05:47:05 +0000 (0:00:01.894) 0:02:32.192 ******* 2026-02-09 05:47:15.420789 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-09 05:47:15.420800 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-09 05:47:15.420810 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-09 
05:47:15.420821 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-09 05:47:15.420831 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-09 05:47:15.420842 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-09 05:47:15.420852 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-09 05:47:15.420863 | orchestrator | 2026-02-09 05:47:15.420873 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-02-09 05:47:15.420884 | orchestrator | Monday 09 February 2026 05:47:06 +0000 (0:00:01.024) 0:02:33.216 ******* 2026-02-09 05:47:15.420894 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:15.420905 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:47:15.420916 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:47:15.420927 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:47:15.420937 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:47:15.420948 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:47:15.420959 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:47:15.420969 | orchestrator | 2026-02-09 05:47:15.420980 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-02-09 05:47:15.420990 | orchestrator | Monday 09 February 2026 05:47:07 +0000 (0:00:01.069) 0:02:34.285 ******* 2026-02-09 05:47:15.421001 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:15.421012 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:47:15.421028 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:47:15.421039 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:47:15.421049 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:47:15.421059 | orchestrator | skipping: [testbed-node-5] 
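The openstack_keys entries being iterated in the ceph-validate tasks above each pair a client name (client.cinder, client.glance, client.nova, ...) with a Ceph capability map. As a rough, hypothetical sketch (not part of this job's output) of how such an entry translates into a `ceph auth get-or-create` invocation, with the dict layout mirroring the loop items printed in the log:

```python
# Hypothetical sketch: turn openstack_keys-style entries (as printed in
# the ceph-validate loop output above) into the equivalent
# "ceph auth get-or-create" argument lists. The entries below are copied
# from the log; the auth_command helper is illustrative only.
openstack_keys = [
    {"name": "client.glance", "mode": "0600",
     "caps": {"mon": "profile rbd",
              "osd": "profile rbd pool=volumes, profile rbd pool=images"}},
    {"name": "client.manila", "mode": "0600",
     "caps": {"mgr": "allow rw", "mon": "allow r",
              "osd": "allow rw pool=cephfs_data"}},
]

def auth_command(entry):
    """Build the ceph CLI argument list for one keyring entry."""
    args = ["ceph", "auth", "get-or-create", entry["name"]]
    # Each daemon type (mgr/mon/osd) contributes one "<daemon> <capstring>" pair.
    for daemon, cap in sorted(entry["caps"].items()):
        args += [daemon, cap]
    return args

for entry in openstack_keys:
    print(" ".join(auth_command(entry)))
```

The `mode: '0600'` field in each entry governs the permissions of the keyring file written to disk, not the capability string itself.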
2026-02-09 05:47:15.421070 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:47:15.421080 | orchestrator | 2026-02-09 05:47:15.421091 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-02-09 05:47:15.421102 | orchestrator | Monday 09 February 2026 05:47:08 +0000 (0:00:00.784) 0:02:35.070 ******* 2026-02-09 05:47:15.421112 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:15.421124 | orchestrator | ok: [testbed-node-1] 2026-02-09 05:47:15.421135 | orchestrator | ok: [testbed-node-2] 2026-02-09 05:47:15.421145 | orchestrator | ok: [testbed-node-3] 2026-02-09 05:47:15.421156 | orchestrator | ok: [testbed-node-4] 2026-02-09 05:47:15.421176 | orchestrator | ok: [testbed-node-5] 2026-02-09 05:47:15.421186 | orchestrator | ok: [testbed-manager] 2026-02-09 05:47:15.421197 | orchestrator | 2026-02-09 05:47:15.421208 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-02-09 05:47:15.421218 | orchestrator | Monday 09 February 2026 05:47:09 +0000 (0:00:01.426) 0:02:36.496 ******* 2026-02-09 05:47:15.421229 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:15.421240 | orchestrator | skipping: [testbed-node-1] 2026-02-09 05:47:15.421255 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:47:15.421267 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:47:15.421277 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:47:15.421288 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:47:15.421298 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:47:15.421309 | orchestrator | 2026-02-09 05:47:15.421319 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-09 05:47:15.421330 | orchestrator | Monday 09 February 2026 05:47:11 +0000 (0:00:01.526) 0:02:38.024 ******* 2026-02-09 05:47:15.421341 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:15.421351 | orchestrator | 
skipping: [testbed-node-1] 2026-02-09 05:47:15.421362 | orchestrator | skipping: [testbed-node-2] 2026-02-09 05:47:15.421372 | orchestrator | skipping: [testbed-node-3] 2026-02-09 05:47:15.421382 | orchestrator | skipping: [testbed-node-4] 2026-02-09 05:47:15.421393 | orchestrator | skipping: [testbed-node-5] 2026-02-09 05:47:15.421403 | orchestrator | skipping: [testbed-manager] 2026-02-09 05:47:15.421414 | orchestrator | 2026-02-09 05:47:15.421453 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-02-09 05:47:15.421464 | orchestrator | Monday 09 February 2026 05:47:12 +0000 (0:00:01.558) 0:02:39.582 ******* 2026-02-09 05:47:15.421474 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:15.421485 | orchestrator | 2026-02-09 05:47:15.421496 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-02-09 05:47:15.421506 | orchestrator | Monday 09 February 2026 05:47:14 +0000 (0:00:01.655) 0:02:41.238 ******* 2026-02-09 05:47:15.421517 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:15.421528 | orchestrator | 2026-02-09 05:47:15.421545 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-02-09 05:47:33.534366 | orchestrator | 2026-02-09 05:47:33.534489 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-09 05:47:33.534504 | orchestrator | Monday 09 February 2026 05:47:15 +0000 (0:00:00.804) 0:02:42.043 ******* 2026-02-09 05:47:33.534514 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.534524 | orchestrator | 2026-02-09 05:47:33.534533 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-09 05:47:33.534542 | orchestrator | Monday 09 February 2026 05:47:15 +0000 (0:00:00.456) 0:02:42.499 ******* 2026-02-09 05:47:33.534551 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.534559 | 
orchestrator | 2026-02-09 05:47:33.534568 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-02-09 05:47:33.534577 | orchestrator | Monday 09 February 2026 05:47:16 +0000 (0:00:00.550) 0:02:43.049 ******* 2026-02-09 05:47:33.534588 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-09 05:47:33.534599 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-09 05:47:33.534609 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-09 05:47:33.534643 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-09 05:47:33.534668 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 
'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-09 05:47:33.534679 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}])  2026-02-09 05:47:33.534690 | orchestrator | 2026-02-09 05:47:33.534699 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-09 05:47:33.534708 | orchestrator | 2026-02-09 05:47:33.534717 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-09 05:47:33.534747 | orchestrator | Monday 09 February 2026 05:47:25 +0000 (0:00:09.463) 0:02:52.512 ******* 2026-02-09 05:47:33.534756 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.534764 | orchestrator | 2026-02-09 05:47:33.534773 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-09 05:47:33.534782 | orchestrator | Monday 09 February 2026 05:47:26 +0000 (0:00:00.486) 0:02:52.999 ******* 2026-02-09 05:47:33.534790 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.534799 | orchestrator | 2026-02-09 05:47:33.534808 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-09 05:47:33.534817 | orchestrator | Monday 09 February 2026 05:47:26 +0000 (0:00:00.147) 0:02:53.147 ******* 2026-02-09 05:47:33.534826 | orchestrator | skipping: 
[testbed-node-0] 2026-02-09 05:47:33.534836 | orchestrator | 2026-02-09 05:47:33.534844 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-09 05:47:33.534853 | orchestrator | Monday 09 February 2026 05:47:26 +0000 (0:00:00.149) 0:02:53.296 ******* 2026-02-09 05:47:33.534862 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.534871 | orchestrator | 2026-02-09 05:47:33.534879 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-09 05:47:33.534888 | orchestrator | Monday 09 February 2026 05:47:26 +0000 (0:00:00.149) 0:02:53.446 ******* 2026-02-09 05:47:33.534897 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-09 05:47:33.534905 | orchestrator | 2026-02-09 05:47:33.534915 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-09 05:47:33.534941 | orchestrator | Monday 09 February 2026 05:47:27 +0000 (0:00:00.252) 0:02:53.698 ******* 2026-02-09 05:47:33.534966 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.534985 | orchestrator | 2026-02-09 05:47:33.534996 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-09 05:47:33.535006 | orchestrator | Monday 09 February 2026 05:47:27 +0000 (0:00:00.448) 0:02:54.147 ******* 2026-02-09 05:47:33.535016 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.535027 | orchestrator | 2026-02-09 05:47:33.535037 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-09 05:47:33.535055 | orchestrator | Monday 09 February 2026 05:47:27 +0000 (0:00:00.157) 0:02:54.305 ******* 2026-02-09 05:47:33.535066 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.535076 | orchestrator | 2026-02-09 05:47:33.535086 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 
2026-02-09 05:47:33.535096 | orchestrator | Monday 09 February 2026 05:47:28 +0000 (0:00:00.477) 0:02:54.783 ******* 2026-02-09 05:47:33.535106 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.535116 | orchestrator | 2026-02-09 05:47:33.535126 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-09 05:47:33.535136 | orchestrator | Monday 09 February 2026 05:47:28 +0000 (0:00:00.399) 0:02:55.182 ******* 2026-02-09 05:47:33.535146 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.535156 | orchestrator | 2026-02-09 05:47:33.535167 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-09 05:47:33.535176 | orchestrator | Monday 09 February 2026 05:47:28 +0000 (0:00:00.156) 0:02:55.338 ******* 2026-02-09 05:47:33.535187 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.535198 | orchestrator | 2026-02-09 05:47:33.535208 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-09 05:47:33.535219 | orchestrator | Monday 09 February 2026 05:47:28 +0000 (0:00:00.168) 0:02:55.507 ******* 2026-02-09 05:47:33.535229 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:33.535239 | orchestrator | 2026-02-09 05:47:33.535249 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-09 05:47:33.535260 | orchestrator | Monday 09 February 2026 05:47:29 +0000 (0:00:00.155) 0:02:55.663 ******* 2026-02-09 05:47:33.535271 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.535280 | orchestrator | 2026-02-09 05:47:33.535289 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-09 05:47:33.535298 | orchestrator | Monday 09 February 2026 05:47:29 +0000 (0:00:00.159) 0:02:55.822 ******* 2026-02-09 05:47:33.535306 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 05:47:33.535316 
| orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 05:47:33.535325 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 05:47:33.535333 | orchestrator | 2026-02-09 05:47:33.535342 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-09 05:47:33.535351 | orchestrator | Monday 09 February 2026 05:47:29 +0000 (0:00:00.657) 0:02:56.479 ******* 2026-02-09 05:47:33.535359 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:33.535368 | orchestrator | 2026-02-09 05:47:33.535377 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-09 05:47:33.535390 | orchestrator | Monday 09 February 2026 05:47:30 +0000 (0:00:00.259) 0:02:56.739 ******* 2026-02-09 05:47:33.535399 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 05:47:33.535408 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 05:47:33.535416 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 05:47:33.535425 | orchestrator | 2026-02-09 05:47:33.535453 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-09 05:47:33.535462 | orchestrator | Monday 09 February 2026 05:47:32 +0000 (0:00:01.906) 0:02:58.645 ******* 2026-02-09 05:47:33.535471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-09 05:47:33.535480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-09 05:47:33.535489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-09 05:47:33.535498 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:33.535506 | orchestrator | 2026-02-09 05:47:33.535515 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] 
********************* 2026-02-09 05:47:33.535524 | orchestrator | Monday 09 February 2026 05:47:32 +0000 (0:00:00.414) 0:02:59.060 ******* 2026-02-09 05:47:33.535541 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-09 05:47:33.535552 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-09 05:47:33.535561 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-09 05:47:33.535570 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:33.535578 | orchestrator | 2026-02-09 05:47:33.535587 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-09 05:47:33.535596 | orchestrator | Monday 09 February 2026 05:47:33 +0000 (0:00:00.924) 0:02:59.985 ******* 2026-02-09 05:47:33.535612 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:38.431301 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:38.431401 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:38.431408 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431414 | orchestrator | 2026-02-09 05:47:38.431419 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-09 05:47:38.431424 | orchestrator | Monday 09 February 2026 05:47:33 +0000 (0:00:00.180) 0:03:00.166 ******* 2026-02-09 05:47:38.431493 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'a495b1786f93', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-09 05:47:30.671196', 'end': '2026-02-09 05:47:30.725478', 'delta': '0:00:00.054282', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a495b1786f93'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-09 05:47:38.431519 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'ab15bd6989cf', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-09 05:47:31.228416', 'end': '2026-02-09 05:47:31.274303', 'delta': '0:00:00.045887', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ab15bd6989cf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-09 05:47:38.431542 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '08d9b4f0b230', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-09 05:47:31.801473', 'end': '2026-02-09 05:47:31.855829', 'delta': '0:00:00.054356', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['08d9b4f0b230'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-09 05:47:38.431546 | orchestrator | 2026-02-09 05:47:38.431550 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-09 05:47:38.431554 | orchestrator | Monday 09 February 2026 05:47:33 +0000 (0:00:00.206) 0:03:00.372 ******* 2026-02-09 05:47:38.431558 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:38.431563 | orchestrator | 2026-02-09 05:47:38.431567 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-09 05:47:38.431571 | orchestrator | 
Monday 09 February 2026 05:47:34 +0000 (0:00:00.291) 0:03:00.663 ******* 2026-02-09 05:47:38.431574 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431578 | orchestrator | 2026-02-09 05:47:38.431582 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-09 05:47:38.431586 | orchestrator | Monday 09 February 2026 05:47:34 +0000 (0:00:00.861) 0:03:01.525 ******* 2026-02-09 05:47:38.431589 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:38.431593 | orchestrator | 2026-02-09 05:47:38.431597 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-09 05:47:38.431600 | orchestrator | Monday 09 February 2026 05:47:35 +0000 (0:00:00.154) 0:03:01.679 ******* 2026-02-09 05:47:38.431617 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-09 05:47:38.431622 | orchestrator | 2026-02-09 05:47:38.431626 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-09 05:47:38.431629 | orchestrator | Monday 09 February 2026 05:47:36 +0000 (0:00:01.386) 0:03:03.066 ******* 2026-02-09 05:47:38.431633 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:38.431637 | orchestrator | 2026-02-09 05:47:38.431641 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-09 05:47:38.431644 | orchestrator | Monday 09 February 2026 05:47:36 +0000 (0:00:00.157) 0:03:03.224 ******* 2026-02-09 05:47:38.431648 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431652 | orchestrator | 2026-02-09 05:47:38.431656 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-09 05:47:38.431659 | orchestrator | Monday 09 February 2026 05:47:36 +0000 (0:00:00.129) 0:03:03.353 ******* 2026-02-09 05:47:38.431663 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431667 | orchestrator | 2026-02-09 
05:47:38.431670 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-09 05:47:38.431674 | orchestrator | Monday 09 February 2026 05:47:36 +0000 (0:00:00.238) 0:03:03.592 ******* 2026-02-09 05:47:38.431678 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431681 | orchestrator | 2026-02-09 05:47:38.431685 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-09 05:47:38.431689 | orchestrator | Monday 09 February 2026 05:47:37 +0000 (0:00:00.133) 0:03:03.725 ******* 2026-02-09 05:47:38.431693 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431696 | orchestrator | 2026-02-09 05:47:38.431700 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-09 05:47:38.431704 | orchestrator | Monday 09 February 2026 05:47:37 +0000 (0:00:00.136) 0:03:03.861 ******* 2026-02-09 05:47:38.431712 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431715 | orchestrator | 2026-02-09 05:47:38.431719 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-09 05:47:38.431723 | orchestrator | Monday 09 February 2026 05:47:37 +0000 (0:00:00.116) 0:03:03.978 ******* 2026-02-09 05:47:38.431727 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431730 | orchestrator | 2026-02-09 05:47:38.431734 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-09 05:47:38.431738 | orchestrator | Monday 09 February 2026 05:47:37 +0000 (0:00:00.141) 0:03:04.119 ******* 2026-02-09 05:47:38.431741 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431745 | orchestrator | 2026-02-09 05:47:38.431749 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-09 05:47:38.431753 | orchestrator | Monday 09 February 2026 05:47:37 +0000 (0:00:00.143) 
0:03:04.263 ******* 2026-02-09 05:47:38.431757 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431761 | orchestrator | 2026-02-09 05:47:38.431767 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-09 05:47:38.431772 | orchestrator | Monday 09 February 2026 05:47:37 +0000 (0:00:00.148) 0:03:04.411 ******* 2026-02-09 05:47:38.431775 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.431779 | orchestrator | 2026-02-09 05:47:38.431783 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-09 05:47:38.431787 | orchestrator | Monday 09 February 2026 05:47:37 +0000 (0:00:00.135) 0:03:04.547 ******* 2026-02-09 05:47:38.431791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:47:38.431795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:47:38.431799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:47:38.431804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-54-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 05:47:38.431814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:47:38.685204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:47:38.685355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:47:38.685403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e53c6ccf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 05:47:38.685422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:47:38.685478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 05:47:38.686319 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:38.686360 | orchestrator | 2026-02-09 05:47:38.686378 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-09 05:47:38.686420 | orchestrator | Monday 09 February 2026 05:47:38 +0000 (0:00:00.515) 0:03:05.063 ******* 2026-02-09 05:47:38.686504 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:38.686521 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:38.686543 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:38.686557 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-54-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:38.686570 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:38.686581 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:38.686612 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:47.842587 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e53c6ccf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:47.842723 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:47.842739 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 05:47:47.842779 | 
orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:47.842792 | orchestrator | 2026-02-09 05:47:47.842803 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-09 05:47:47.842815 | orchestrator | Monday 09 February 2026 05:47:38 +0000 (0:00:00.249) 0:03:05.312 ******* 2026-02-09 05:47:47.842824 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:47.842835 | orchestrator | 2026-02-09 05:47:47.842845 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-09 05:47:47.842854 | orchestrator | Monday 09 February 2026 05:47:39 +0000 (0:00:00.560) 0:03:05.873 ******* 2026-02-09 05:47:47.842864 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:47.842873 | orchestrator | 2026-02-09 05:47:47.842882 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-09 05:47:47.842911 | orchestrator | Monday 09 February 2026 05:47:39 +0000 (0:00:00.150) 0:03:06.024 ******* 2026-02-09 05:47:47.842922 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:47:47.842931 | orchestrator | 2026-02-09 05:47:47.842941 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-09 05:47:47.842950 | orchestrator | Monday 09 February 2026 05:47:39 +0000 (0:00:00.477) 0:03:06.502 ******* 2026-02-09 05:47:47.842960 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:47.842969 | orchestrator | 2026-02-09 05:47:47.842980 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-09 05:47:47.842993 | orchestrator | Monday 09 February 2026 05:47:40 +0000 (0:00:00.146) 0:03:06.648 ******* 2026-02-09 05:47:47.843004 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:47.843015 | orchestrator | 2026-02-09 05:47:47.843027 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-09 
05:47:47.843038 | orchestrator | Monday 09 February 2026 05:47:40 +0000 (0:00:00.257) 0:03:06.906 ******* 2026-02-09 05:47:47.843049 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:47.843060 | orchestrator | 2026-02-09 05:47:47.843072 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-09 05:47:47.843083 | orchestrator | Monday 09 February 2026 05:47:40 +0000 (0:00:00.151) 0:03:07.057 ******* 2026-02-09 05:47:47.843095 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 05:47:47.843106 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-09 05:47:47.843117 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-09 05:47:47.843129 | orchestrator | 2026-02-09 05:47:47.843140 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-09 05:47:47.843151 | orchestrator | Monday 09 February 2026 05:47:41 +0000 (0:00:01.022) 0:03:08.080 ******* 2026-02-09 05:47:47.843170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-09 05:47:47.843182 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-09 05:47:47.843193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-09 05:47:47.843204 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:47.843216 | orchestrator | 2026-02-09 05:47:47.843227 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-09 05:47:47.843238 | orchestrator | Monday 09 February 2026 05:47:41 +0000 (0:00:00.183) 0:03:08.263 ******* 2026-02-09 05:47:47.843248 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:47.843258 | orchestrator | 2026-02-09 05:47:47.843267 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-09 05:47:47.843277 | orchestrator | Monday 09 February 2026 05:47:41 +0000 
(0:00:00.147) 0:03:08.410 ******* 2026-02-09 05:47:47.843286 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 05:47:47.843296 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 05:47:47.843315 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 05:47:47.843324 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-09 05:47:47.843334 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-09 05:47:47.843343 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-09 05:47:47.843352 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-09 05:47:47.843363 | orchestrator | 2026-02-09 05:47:47.843372 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-09 05:47:47.843382 | orchestrator | Monday 09 February 2026 05:47:42 +0000 (0:00:01.106) 0:03:09.517 ******* 2026-02-09 05:47:47.843391 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 05:47:47.843401 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 05:47:47.843410 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 05:47:47.843420 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-09 05:47:47.843429 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-09 05:47:47.843459 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-09 05:47:47.843470 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-09 
05:47:47.843479 | orchestrator | 2026-02-09 05:47:47.843489 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-09 05:47:47.843498 | orchestrator | Monday 09 February 2026 05:47:44 +0000 (0:00:01.942) 0:03:11.460 ******* 2026-02-09 05:47:47.843507 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-09 05:47:47.843517 | orchestrator | 2026-02-09 05:47:47.843526 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-09 05:47:47.843536 | orchestrator | Monday 09 February 2026 05:47:46 +0000 (0:00:01.284) 0:03:12.744 ******* 2026-02-09 05:47:47.843545 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:47.843554 | orchestrator | 2026-02-09 05:47:47.843564 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-09 05:47:47.843573 | orchestrator | Monday 09 February 2026 05:47:46 +0000 (0:00:00.277) 0:03:13.022 ******* 2026-02-09 05:47:47.843583 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:47:47.843592 | orchestrator | 2026-02-09 05:47:47.843602 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-09 05:47:47.843611 | orchestrator | Monday 09 February 2026 05:47:46 +0000 (0:00:00.154) 0:03:13.177 ******* 2026-02-09 05:47:47.843620 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-09 05:47:47.843630 | orchestrator | 2026-02-09 05:47:47.843639 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-09 05:47:47.843655 | orchestrator | Monday 09 February 2026 05:47:47 +0000 (0:00:01.299) 0:03:14.476 ******* 2026-02-09 05:48:13.427404 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:48:13.427578 | orchestrator | 2026-02-09 05:48:13.427599 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-02-09 05:48:13.427612 | orchestrator | Monday 09 February 2026 05:47:47 +0000 (0:00:00.138) 0:03:14.614 *******
2026-02-09 05:48:13.427624 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:48:13.427634 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 05:48:13.427645 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 05:48:13.427655 | orchestrator |
2026-02-09 05:48:13.427665 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-09 05:48:13.427674 | orchestrator | Monday 09 February 2026 05:47:49 +0000 (0:00:01.541) 0:03:16.155 *******
2026-02-09 05:48:13.427707 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-09 05:48:13.427717 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-09 05:48:13.427728 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-09 05:48:13.427738 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-09 05:48:13.427748 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-09 05:48:13.427779 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-09 05:48:13.427796 | orchestrator |
2026-02-09 05:48:13.427813 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-09 05:48:13.427829 | orchestrator | Monday 09 February 2026 05:48:01 +0000 (0:00:11.666) 0:03:27.822 *******
2026-02-09 05:48:13.427847 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:48:13.427862 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:48:13.427878 | orchestrator |
2026-02-09 05:48:13.427895 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-09 05:48:13.427910 | orchestrator | Monday 09 February 2026 05:48:03 +0000 (0:00:02.817) 0:03:30.639 *******
2026-02-09 05:48:13.427925 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:48:13.427943 | orchestrator |
2026-02-09 05:48:13.427961 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-09 05:48:13.427978 | orchestrator | Monday 09 February 2026 05:48:05 +0000 (0:00:01.407) 0:03:32.046 *******
2026-02-09 05:48:13.427996 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-09 05:48:13.428015 | orchestrator |
2026-02-09 05:48:13.428032 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-09 05:48:13.428044 | orchestrator | Monday 09 February 2026 05:48:05 +0000 (0:00:00.575) 0:03:32.621 *******
2026-02-09 05:48:13.428055 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-09 05:48:13.428066 | orchestrator |
2026-02-09 05:48:13.428078 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-09 05:48:13.428089 | orchestrator | Monday 09 February 2026 05:48:06 +0000 (0:00:00.872) 0:03:33.494 *******
2026-02-09 05:48:13.428100 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:13.428111 | orchestrator |
2026-02-09 05:48:13.428123 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-09 05:48:13.428134 | orchestrator | Monday 09 February 2026 05:48:07 +0000 (0:00:00.544) 0:03:34.039 *******
2026-02-09 05:48:13.428145 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428155 | orchestrator |
2026-02-09 05:48:13.428164 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-09 05:48:13.428173 | orchestrator | Monday 09 February 2026 05:48:07 +0000 (0:00:00.152) 0:03:34.191 *******
2026-02-09 05:48:13.428183 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428192 | orchestrator |
2026-02-09 05:48:13.428202 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-09 05:48:13.428211 | orchestrator | Monday 09 February 2026 05:48:07 +0000 (0:00:00.156) 0:03:34.347 *******
2026-02-09 05:48:13.428220 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428230 | orchestrator |
2026-02-09 05:48:13.428239 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-09 05:48:13.428248 | orchestrator | Monday 09 February 2026 05:48:07 +0000 (0:00:00.163) 0:03:34.511 *******
2026-02-09 05:48:13.428258 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:13.428267 | orchestrator |
2026-02-09 05:48:13.428277 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-09 05:48:13.428297 | orchestrator | Monday 09 February 2026 05:48:08 +0000 (0:00:00.548) 0:03:35.060 *******
2026-02-09 05:48:13.428307 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428316 | orchestrator |
2026-02-09 05:48:13.428326 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-09 05:48:13.428335 | orchestrator | Monday 09 February 2026 05:48:08 +0000 (0:00:00.149) 0:03:35.209 *******
2026-02-09 05:48:13.428350 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428366 | orchestrator |
2026-02-09 05:48:13.428382 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-09 05:48:13.428397 | orchestrator | Monday 09 February 2026 05:48:08 +0000 (0:00:00.140) 0:03:35.349 *******
2026-02-09 05:48:13.428413 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:13.428430 | orchestrator |
2026-02-09 05:48:13.428448 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-09 05:48:13.428488 | orchestrator | Monday 09 February 2026 05:48:09 +0000 (0:00:00.591) 0:03:35.941 *******
2026-02-09 05:48:13.428504 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:13.428518 | orchestrator |
2026-02-09 05:48:13.428554 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-09 05:48:13.428573 | orchestrator | Monday 09 February 2026 05:48:09 +0000 (0:00:00.577) 0:03:36.519 *******
2026-02-09 05:48:13.428590 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428606 | orchestrator |
2026-02-09 05:48:13.428622 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-09 05:48:13.428632 | orchestrator | Monday 09 February 2026 05:48:10 +0000 (0:00:00.128) 0:03:36.647 *******
2026-02-09 05:48:13.428641 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:13.428651 | orchestrator |
2026-02-09 05:48:13.428660 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-09 05:48:13.428669 | orchestrator | Monday 09 February 2026 05:48:10 +0000 (0:00:00.163) 0:03:36.811 *******
2026-02-09 05:48:13.428678 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428688 | orchestrator |
2026-02-09 05:48:13.428697 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-09 05:48:13.428706 | orchestrator | Monday 09 February 2026 05:48:10 +0000 (0:00:00.182) 0:03:36.993 *******
2026-02-09 05:48:13.428716 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428725 | orchestrator |
2026-02-09 05:48:13.428734 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-09 05:48:13.428744 | orchestrator | Monday 09 February 2026 05:48:10 +0000 (0:00:00.141) 0:03:37.135 *******
2026-02-09 05:48:13.428753 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428763 | orchestrator |
2026-02-09 05:48:13.428772 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-09 05:48:13.428781 | orchestrator | Monday 09 February 2026 05:48:10 +0000 (0:00:00.426) 0:03:37.562 *******
2026-02-09 05:48:13.428791 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428800 | orchestrator |
2026-02-09 05:48:13.428817 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-09 05:48:13.428827 | orchestrator | Monday 09 February 2026 05:48:11 +0000 (0:00:00.171) 0:03:37.734 *******
2026-02-09 05:48:13.428836 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.428846 | orchestrator |
2026-02-09 05:48:13.428855 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-09 05:48:13.428864 | orchestrator | Monday 09 February 2026 05:48:11 +0000 (0:00:00.132) 0:03:37.866 *******
2026-02-09 05:48:13.428874 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:13.428883 | orchestrator |
2026-02-09 05:48:13.428892 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-09 05:48:13.428902 | orchestrator | Monday 09 February 2026 05:48:11 +0000 (0:00:00.163) 0:03:38.030 *******
2026-02-09 05:48:13.428911 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:13.428920 | orchestrator |
2026-02-09 05:48:13.428929 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-09 05:48:13.428959 | orchestrator | Monday 09 February 2026 05:48:11 +0000 (0:00:00.150) 0:03:38.180 *******
2026-02-09 05:48:13.428969 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:13.428979 | orchestrator |
2026-02-09 05:48:13.428988 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-09 05:48:13.428997 | orchestrator | Monday 09 February 2026 05:48:11 +0000 (0:00:00.229) 0:03:38.410 *******
2026-02-09 05:48:13.429007 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429016 | orchestrator |
2026-02-09 05:48:13.429025 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-09 05:48:13.429035 | orchestrator | Monday 09 February 2026 05:48:11 +0000 (0:00:00.129) 0:03:38.540 *******
2026-02-09 05:48:13.429044 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429053 | orchestrator |
2026-02-09 05:48:13.429063 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-09 05:48:13.429072 | orchestrator | Monday 09 February 2026 05:48:12 +0000 (0:00:00.129) 0:03:38.670 *******
2026-02-09 05:48:13.429082 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429091 | orchestrator |
2026-02-09 05:48:13.429100 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-09 05:48:13.429110 | orchestrator | Monday 09 February 2026 05:48:12 +0000 (0:00:00.135) 0:03:38.805 *******
2026-02-09 05:48:13.429119 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429128 | orchestrator |
2026-02-09 05:48:13.429138 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-09 05:48:13.429147 | orchestrator | Monday 09 February 2026 05:48:12 +0000 (0:00:00.142) 0:03:38.947 *******
2026-02-09 05:48:13.429156 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429166 | orchestrator |
2026-02-09 05:48:13.429175 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-09 05:48:13.429185 | orchestrator | Monday 09 February 2026 05:48:12 +0000 (0:00:00.125) 0:03:39.072 *******
2026-02-09 05:48:13.429194 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429203 | orchestrator |
2026-02-09 05:48:13.429213 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-09 05:48:13.429222 | orchestrator | Monday 09 February 2026 05:48:12 +0000 (0:00:00.142) 0:03:39.215 *******
2026-02-09 05:48:13.429232 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429241 | orchestrator |
2026-02-09 05:48:13.429250 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-09 05:48:13.429260 | orchestrator | Monday 09 February 2026 05:48:12 +0000 (0:00:00.425) 0:03:39.641 *******
2026-02-09 05:48:13.429269 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429279 | orchestrator |
2026-02-09 05:48:13.429288 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-09 05:48:13.429298 | orchestrator | Monday 09 February 2026 05:48:13 +0000 (0:00:00.143) 0:03:39.784 *******
2026-02-09 05:48:13.429307 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429316 | orchestrator |
2026-02-09 05:48:13.429326 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-09 05:48:13.429335 | orchestrator | Monday 09 February 2026 05:48:13 +0000 (0:00:00.135) 0:03:39.920 *******
2026-02-09 05:48:13.429344 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:13.429354 | orchestrator |
2026-02-09 05:48:13.429363 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-09 05:48:13.429372 | orchestrator | Monday 09 February 2026 05:48:13 +0000 (0:00:00.136) 0:03:40.057 *******
2026-02-09 05:48:32.438801 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.438956 | orchestrator |
2026-02-09 05:48:32.438976 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-09 05:48:32.438990 | orchestrator | Monday 09 February 2026 05:48:13 +0000 (0:00:00.135) 0:03:40.192 *******
2026-02-09 05:48:32.439002 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439014 | orchestrator |
2026-02-09 05:48:32.439025 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-09 05:48:32.439067 | orchestrator | Monday 09 February 2026 05:48:13 +0000 (0:00:00.216) 0:03:40.409 *******
2026-02-09 05:48:32.439079 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:32.439091 | orchestrator |
2026-02-09 05:48:32.439102 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-09 05:48:32.439113 | orchestrator | Monday 09 February 2026 05:48:14 +0000 (0:00:01.022) 0:03:41.431 *******
2026-02-09 05:48:32.439124 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:32.439134 | orchestrator |
2026-02-09 05:48:32.439146 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-09 05:48:32.439157 | orchestrator | Monday 09 February 2026 05:48:16 +0000 (0:00:01.354) 0:03:42.786 *******
2026-02-09 05:48:32.439168 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-09 05:48:32.439180 | orchestrator |
2026-02-09 05:48:32.439191 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-09 05:48:32.439202 | orchestrator | Monday 09 February 2026 05:48:16 +0000 (0:00:00.588) 0:03:43.374 *******
2026-02-09 05:48:32.439214 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439225 | orchestrator |
2026-02-09 05:48:32.439253 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-09 05:48:32.439264 | orchestrator | Monday 09 February 2026 05:48:16 +0000 (0:00:00.150) 0:03:43.525 *******
2026-02-09 05:48:32.439275 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439286 | orchestrator |
2026-02-09 05:48:32.439297 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-09 05:48:32.439307 | orchestrator | Monday 09 February 2026 05:48:17 +0000 (0:00:00.127) 0:03:43.653 *******
2026-02-09 05:48:32.439318 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-09 05:48:32.439328 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-09 05:48:32.439340 | orchestrator |
2026-02-09 05:48:32.439351 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-09 05:48:32.439362 | orchestrator | Monday 09 February 2026 05:48:18 +0000 (0:00:01.127) 0:03:44.781 *******
2026-02-09 05:48:32.439372 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:32.439383 | orchestrator |
2026-02-09 05:48:32.439394 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-09 05:48:32.439404 | orchestrator | Monday 09 February 2026 05:48:18 +0000 (0:00:00.655) 0:03:45.436 *******
2026-02-09 05:48:32.439415 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439426 | orchestrator |
2026-02-09 05:48:32.439436 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-09 05:48:32.439447 | orchestrator | Monday 09 February 2026 05:48:18 +0000 (0:00:00.157) 0:03:45.594 *******
2026-02-09 05:48:32.439457 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439497 | orchestrator |
2026-02-09 05:48:32.439509 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-09 05:48:32.439520 | orchestrator | Monday 09 February 2026 05:48:19 +0000 (0:00:00.142) 0:03:45.736 *******
2026-02-09 05:48:32.439530 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439541 | orchestrator |
2026-02-09 05:48:32.439552 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-09 05:48:32.439563 | orchestrator | Monday 09 February 2026 05:48:19 +0000 (0:00:00.162) 0:03:45.899 *******
2026-02-09 05:48:32.439573 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-09 05:48:32.439584 | orchestrator |
2026-02-09 05:48:32.439594 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-09 05:48:32.439605 | orchestrator | Monday 09 February 2026 05:48:19 +0000 (0:00:00.592) 0:03:46.492 *******
2026-02-09 05:48:32.439616 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:32.439627 | orchestrator |
2026-02-09 05:48:32.439638 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-09 05:48:32.439657 | orchestrator | Monday 09 February 2026 05:48:20 +0000 (0:00:00.723) 0:03:47.215 *******
2026-02-09 05:48:32.439668 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-09 05:48:32.439679 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-09 05:48:32.439689 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-09 05:48:32.439700 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439711 | orchestrator |
2026-02-09 05:48:32.439722 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-09 05:48:32.439732 | orchestrator | Monday 09 February 2026 05:48:20 +0000 (0:00:00.158) 0:03:47.374 *******
2026-02-09 05:48:32.439743 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439754 | orchestrator |
2026-02-09 05:48:32.439764 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-09 05:48:32.439775 | orchestrator | Monday 09 February 2026 05:48:20 +0000 (0:00:00.130) 0:03:47.505 *******
2026-02-09 05:48:32.439786 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439796 | orchestrator |
2026-02-09 05:48:32.439807 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-09 05:48:32.439818 | orchestrator | Monday 09 February 2026 05:48:21 +0000 (0:00:00.204) 0:03:47.709 *******
2026-02-09 05:48:32.439829 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439840 | orchestrator |
2026-02-09 05:48:32.439851 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-09 05:48:32.439880 | orchestrator | Monday 09 February 2026 05:48:21 +0000 (0:00:00.155) 0:03:47.865 *******
2026-02-09 05:48:32.439891 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439902 | orchestrator |
2026-02-09 05:48:32.439913 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-09 05:48:32.439924 | orchestrator | Monday 09 February 2026 05:48:21 +0000 (0:00:00.160) 0:03:48.025 *******
2026-02-09 05:48:32.439935 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.439945 | orchestrator |
2026-02-09 05:48:32.439956 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-09 05:48:32.439967 | orchestrator | Monday 09 February 2026 05:48:21 +0000 (0:00:00.425) 0:03:48.451 *******
2026-02-09 05:48:32.439978 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:32.439988 | orchestrator |
2026-02-09 05:48:32.439999 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-09 05:48:32.440010 | orchestrator | Monday 09 February 2026 05:48:23 +0000 (0:00:01.530) 0:03:49.981 *******
2026-02-09 05:48:32.440020 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:32.440031 | orchestrator |
2026-02-09 05:48:32.440042 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-09 05:48:32.440053 | orchestrator | Monday 09 February 2026 05:48:23 +0000 (0:00:00.159) 0:03:50.141 *******
2026-02-09 05:48:32.440063 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-09 05:48:32.440074 | orchestrator |
2026-02-09 05:48:32.440085 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-09 05:48:32.440095 | orchestrator | Monday 09 February 2026 05:48:24 +0000 (0:00:00.659) 0:03:50.801 *******
2026-02-09 05:48:32.440106 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.440117 | orchestrator |
2026-02-09 05:48:32.440133 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-09 05:48:32.440144 | orchestrator | Monday 09 February 2026 05:48:24 +0000 (0:00:00.157) 0:03:50.959 *******
2026-02-09 05:48:32.440155 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.440166 | orchestrator |
2026-02-09 05:48:32.440177 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-09 05:48:32.440187 | orchestrator | Monday 09 February 2026 05:48:24 +0000 (0:00:00.161) 0:03:51.120 *******
2026-02-09 05:48:32.440198 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.440209 | orchestrator |
2026-02-09 05:48:32.440227 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-09 05:48:32.440238 | orchestrator | Monday 09 February 2026 05:48:24 +0000 (0:00:00.152) 0:03:51.273 *******
2026-02-09 05:48:32.440249 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.440259 | orchestrator |
2026-02-09 05:48:32.440270 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-09 05:48:32.440281 | orchestrator | Monday 09 February 2026 05:48:24 +0000 (0:00:00.149) 0:03:51.423 *******
2026-02-09 05:48:32.440292 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.440303 | orchestrator |
2026-02-09 05:48:32.440313 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-09 05:48:32.440324 | orchestrator | Monday 09 February 2026 05:48:24 +0000 (0:00:00.151) 0:03:51.574 *******
2026-02-09 05:48:32.440335 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.440345 | orchestrator |
2026-02-09 05:48:32.440356 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-09 05:48:32.440367 | orchestrator | Monday 09 February 2026 05:48:25 +0000 (0:00:00.185) 0:03:51.760 *******
2026-02-09 05:48:32.440378 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.440388 | orchestrator |
2026-02-09 05:48:32.440399 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-09 05:48:32.440410 | orchestrator | Monday 09 February 2026 05:48:25 +0000 (0:00:00.166) 0:03:51.926 *******
2026-02-09 05:48:32.440421 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:32.440431 | orchestrator |
2026-02-09 05:48:32.440442 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-09 05:48:32.440453 | orchestrator | Monday 09 February 2026 05:48:25 +0000 (0:00:00.151) 0:03:52.078 *******
2026-02-09 05:48:32.440464 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:32.440497 | orchestrator |
2026-02-09 05:48:32.440508 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-09 05:48:32.440518 | orchestrator | Monday 09 February 2026 05:48:26 +0000 (0:00:00.575) 0:03:52.654 *******
2026-02-09 05:48:32.440529 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-09 05:48:32.440540 | orchestrator |
2026-02-09 05:48:32.440550 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-09 05:48:32.440561 | orchestrator | Monday 09 February 2026 05:48:26 +0000 (0:00:00.596) 0:03:53.250 *******
2026-02-09 05:48:32.440572 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-09 05:48:32.440584 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-09 05:48:32.440594 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-09 05:48:32.440605 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-09 05:48:32.440616 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-09 05:48:32.440627 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-09 05:48:32.440637 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-09 05:48:32.440648 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-09 05:48:32.440659 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-09 05:48:32.440670 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-09 05:48:32.440680 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-09 05:48:32.440691 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-09 05:48:32.440702 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-09 05:48:32.440713 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-09 05:48:32.440729 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-09 05:48:45.587597 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-09 05:48:45.587755 | orchestrator |
2026-02-09 05:48:45.587774 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-09 05:48:45.587820 | orchestrator | Monday 09 February 2026 05:48:32 +0000 (0:00:05.805) 0:03:59.055 *******
2026-02-09 05:48:45.587833 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.587848 | orchestrator |
2026-02-09 05:48:45.587860 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-09 05:48:45.587873 | orchestrator | Monday 09 February 2026 05:48:32 +0000 (0:00:00.142) 0:03:59.198 *******
2026-02-09 05:48:45.587885 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.587897 | orchestrator |
2026-02-09 05:48:45.587909 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-09 05:48:45.587922 | orchestrator | Monday 09 February 2026 05:48:32 +0000 (0:00:00.139) 0:03:59.337 *******
2026-02-09 05:48:45.587933 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.587946 | orchestrator |
2026-02-09 05:48:45.587959 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-09 05:48:45.587971 | orchestrator | Monday 09 February 2026 05:48:32 +0000 (0:00:00.157) 0:03:59.495 *******
2026-02-09 05:48:45.587983 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.587995 | orchestrator |
2026-02-09 05:48:45.588006 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-09 05:48:45.588018 | orchestrator | Monday 09 February 2026 05:48:33 +0000 (0:00:00.153) 0:03:59.649 *******
2026-02-09 05:48:45.588030 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588042 | orchestrator |
2026-02-09 05:48:45.588073 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-09 05:48:45.588085 | orchestrator | Monday 09 February 2026 05:48:33 +0000 (0:00:00.147) 0:03:59.796 *******
2026-02-09 05:48:45.588093 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588102 | orchestrator |
2026-02-09 05:48:45.588111 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-09 05:48:45.588120 | orchestrator | Monday 09 February 2026 05:48:33 +0000 (0:00:00.144) 0:03:59.941 *******
2026-02-09 05:48:45.588128 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588137 | orchestrator |
2026-02-09 05:48:45.588145 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-09 05:48:45.588153 | orchestrator | Monday 09 February 2026 05:48:33 +0000 (0:00:00.144) 0:04:00.085 *******
2026-02-09 05:48:45.588161 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588170 | orchestrator |
2026-02-09 05:48:45.588178 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-09 05:48:45.588187 | orchestrator | Monday 09 February 2026 05:48:33 +0000 (0:00:00.118) 0:04:00.204 *******
2026-02-09 05:48:45.588196 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588204 | orchestrator |
2026-02-09 05:48:45.588213 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-09 05:48:45.588221 | orchestrator | Monday 09 February 2026 05:48:33 +0000 (0:00:00.141) 0:04:00.345 *******
2026-02-09 05:48:45.588230 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588239 | orchestrator |
2026-02-09 05:48:45.588247 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-09 05:48:45.588255 | orchestrator | Monday 09 February 2026 05:48:34 +0000 (0:00:00.391) 0:04:00.736 *******
2026-02-09 05:48:45.588264 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588272 | orchestrator |
2026-02-09 05:48:45.588280 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-09 05:48:45.588288 | orchestrator | Monday 09 February 2026 05:48:34 +0000 (0:00:00.137) 0:04:00.874 *******
2026-02-09 05:48:45.588296 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588304 | orchestrator |
2026-02-09 05:48:45.588312 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-09 05:48:45.588321 | orchestrator | Monday 09 February 2026 05:48:34 +0000 (0:00:00.151) 0:04:01.025 *******
2026-02-09 05:48:45.588329 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588345 | orchestrator |
2026-02-09 05:48:45.588353 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-09 05:48:45.588362 | orchestrator | Monday 09 February 2026 05:48:34 +0000 (0:00:00.234) 0:04:01.260 *******
2026-02-09 05:48:45.588369 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588377 | orchestrator |
2026-02-09 05:48:45.588385 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-09 05:48:45.588393 | orchestrator | Monday 09 February 2026 05:48:34 +0000 (0:00:00.132) 0:04:01.393 *******
2026-02-09 05:48:45.588402 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588410 | orchestrator |
2026-02-09 05:48:45.588418 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-09 05:48:45.588427 | orchestrator | Monday 09 February 2026 05:48:35 +0000 (0:00:00.258) 0:04:01.651 *******
2026-02-09 05:48:45.588434 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588441 | orchestrator |
2026-02-09 05:48:45.588448 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-09 05:48:45.588455 | orchestrator | Monday 09 February 2026 05:48:35 +0000 (0:00:00.141) 0:04:01.793 *******
2026-02-09 05:48:45.588462 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588469 | orchestrator |
2026-02-09 05:48:45.588497 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-09 05:48:45.588506 | orchestrator | Monday 09 February 2026 05:48:35 +0000 (0:00:00.144) 0:04:01.938 *******
2026-02-09 05:48:45.588513 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588520 | orchestrator |
2026-02-09 05:48:45.588527 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-09 05:48:45.588534 | orchestrator | Monday 09 February 2026 05:48:35 +0000 (0:00:00.133) 0:04:02.071 *******
2026-02-09 05:48:45.588541 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588548 | orchestrator |
2026-02-09 05:48:45.588570 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-09 05:48:45.588578 | orchestrator | Monday 09 February 2026 05:48:35 +0000 (0:00:00.135) 0:04:02.207 *******
2026-02-09 05:48:45.588585 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588592 | orchestrator |
2026-02-09 05:48:45.588599 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-09 05:48:45.588606 | orchestrator | Monday 09 February 2026 05:48:35 +0000 (0:00:00.151) 0:04:02.358 *******
2026-02-09 05:48:45.588613 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588620 | orchestrator |
2026-02-09 05:48:45.588627 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-09 05:48:45.588634 | orchestrator | Monday 09 February 2026 05:48:35 +0000 (0:00:00.146) 0:04:02.504 *******
2026-02-09 05:48:45.588641 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-09 05:48:45.588649 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-09 05:48:45.588656 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-09 05:48:45.588663 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588670 | orchestrator |
2026-02-09 05:48:45.588677 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-09 05:48:45.588684 | orchestrator | Monday 09 February 2026 05:48:36 +0000 (0:00:00.729) 0:04:03.234 *******
2026-02-09 05:48:45.588691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-09 05:48:45.588698 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-09 05:48:45.588705 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-09 05:48:45.588712 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588719 | orchestrator |
2026-02-09 05:48:45.588730 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-09 05:48:45.588738 | orchestrator | Monday 09 February 2026 05:48:37 +0000 (0:00:01.018) 0:04:04.252 *******
2026-02-09 05:48:45.588745 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-09 05:48:45.588757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-09 05:48:45.588764 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-09 05:48:45.588771 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588779 | orchestrator |
2026-02-09 05:48:45.588786 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-09 05:48:45.588793 | orchestrator | Monday 09 February 2026 05:48:38 +0000 (0:00:00.415) 0:04:04.668 *******
2026-02-09 05:48:45.588800 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588807 | orchestrator |
2026-02-09 05:48:45.588814 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-09 05:48:45.588821 | orchestrator | Monday 09 February 2026 05:48:38 +0000 (0:00:00.136) 0:04:04.804 *******
2026-02-09 05:48:45.588828 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-09 05:48:45.588835 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:48:45.588842 | orchestrator |
2026-02-09 05:48:45.588849 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-09 05:48:45.588856 | orchestrator | Monday 09 February 2026 05:48:38 +0000 (0:00:00.602) 0:04:05.407 *******
2026-02-09 05:48:45.588863 | orchestrator | changed: [testbed-node-0]
2026-02-09 05:48:45.588870 | orchestrator |
2026-02-09 05:48:45.588877 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-09 05:48:45.588884 | orchestrator | Monday 09 February 2026 05:48:39 +0000 (0:00:00.190) 0:04:06.263 *******
2026-02-09 05:48:45.588891 | orchestrator | ok: [testbed-node-0]
2026-02-09 05:48:45.588898 | orchestrator |
2026-02-09 05:48:45.588905 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-09 05:48:45.588912 | orchestrator | Monday 09 February 2026 05:48:39 +0000 (0:00:00.633) 0:04:06.454 *******
2026-02-09 05:48:45.588919 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-02-09 05:48:45.588927 | orchestrator |
2026-02-09 05:48:45.588934 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-09 05:48:45.588941 | orchestrator | Monday 09 February 2026 05:48:40 +0000 (0:00:00.633) 0:04:07.088 *******
2026-02-09 05:48:45.588948 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-09 05:48:45.588955 | orchestrator
| 2026-02-09 05:48:45.588962 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-09 05:48:45.588969 | orchestrator | Monday 09 February 2026 05:48:42 +0000 (0:00:02.128) 0:04:09.217 ******* 2026-02-09 05:48:45.588976 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:48:45.588983 | orchestrator | 2026-02-09 05:48:45.588990 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-09 05:48:45.588997 | orchestrator | Monday 09 February 2026 05:48:42 +0000 (0:00:00.193) 0:04:09.410 ******* 2026-02-09 05:48:45.589005 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:48:45.589017 | orchestrator | 2026-02-09 05:48:45.589029 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-09 05:48:45.589042 | orchestrator | Monday 09 February 2026 05:48:42 +0000 (0:00:00.180) 0:04:09.590 ******* 2026-02-09 05:48:45.589053 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:48:45.589065 | orchestrator | 2026-02-09 05:48:45.589076 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-09 05:48:45.589089 | orchestrator | Monday 09 February 2026 05:48:43 +0000 (0:00:00.465) 0:04:10.056 ******* 2026-02-09 05:48:45.589100 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:48:45.589113 | orchestrator | 2026-02-09 05:48:45.589120 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-09 05:48:45.589127 | orchestrator | Monday 09 February 2026 05:48:44 +0000 (0:00:01.042) 0:04:11.099 ******* 2026-02-09 05:48:45.589134 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:48:45.589141 | orchestrator | 2026-02-09 05:48:45.589151 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-09 05:48:45.589164 | orchestrator | Monday 09 February 2026 05:48:45 +0000 (0:00:00.634) 
0:04:11.734 ******* 2026-02-09 05:48:45.589185 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:48:45.589197 | orchestrator | 2026-02-09 05:48:45.589217 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-09 05:49:39.045738 | orchestrator | Monday 09 February 2026 05:48:45 +0000 (0:00:00.479) 0:04:12.213 ******* 2026-02-09 05:49:39.045869 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.045884 | orchestrator | 2026-02-09 05:49:39.045892 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-09 05:49:39.045899 | orchestrator | Monday 09 February 2026 05:48:46 +0000 (0:00:00.486) 0:04:12.700 ******* 2026-02-09 05:49:39.045906 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.045912 | orchestrator | 2026-02-09 05:49:39.045919 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-09 05:49:39.045929 | orchestrator | Monday 09 February 2026 05:48:46 +0000 (0:00:00.806) 0:04:13.507 ******* 2026-02-09 05:49:39.045940 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.045952 | orchestrator | 2026-02-09 05:49:39.045963 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-09 05:49:39.045973 | orchestrator | Monday 09 February 2026 05:48:47 +0000 (0:00:00.751) 0:04:14.258 ******* 2026-02-09 05:49:39.045984 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-09 05:49:39.045996 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-09 05:49:39.046007 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-09 05:49:39.046071 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-09 05:49:39.046084 | orchestrator | 2026-02-09 05:49:39.046095 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-09 
05:49:39.046106 | orchestrator | Monday 09 February 2026 05:48:50 +0000 (0:00:02.852) 0:04:17.110 ******* 2026-02-09 05:49:39.046133 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:49:39.046145 | orchestrator | 2026-02-09 05:49:39.046156 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-09 05:49:39.046167 | orchestrator | Monday 09 February 2026 05:48:51 +0000 (0:00:01.049) 0:04:18.159 ******* 2026-02-09 05:49:39.046178 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.046189 | orchestrator | 2026-02-09 05:49:39.046201 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-09 05:49:39.046212 | orchestrator | Monday 09 February 2026 05:48:51 +0000 (0:00:00.153) 0:04:18.313 ******* 2026-02-09 05:49:39.046223 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.046233 | orchestrator | 2026-02-09 05:49:39.046239 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-09 05:49:39.046245 | orchestrator | Monday 09 February 2026 05:48:51 +0000 (0:00:00.147) 0:04:18.461 ******* 2026-02-09 05:49:39.046256 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.046267 | orchestrator | 2026-02-09 05:49:39.046277 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-09 05:49:39.046284 | orchestrator | Monday 09 February 2026 05:48:52 +0000 (0:00:01.021) 0:04:19.482 ******* 2026-02-09 05:49:39.046290 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.046296 | orchestrator | 2026-02-09 05:49:39.046303 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-09 05:49:39.046309 | orchestrator | Monday 09 February 2026 05:48:53 +0000 (0:00:00.500) 0:04:19.983 ******* 2026-02-09 05:49:39.046315 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:49:39.046323 | orchestrator | 2026-02-09 
05:49:39.046341 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-09 05:49:39.046348 | orchestrator | Monday 09 February 2026 05:48:53 +0000 (0:00:00.397) 0:04:20.381 ******* 2026-02-09 05:49:39.046355 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-09 05:49:39.046362 | orchestrator | 2026-02-09 05:49:39.046368 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-09 05:49:39.046375 | orchestrator | Monday 09 February 2026 05:48:54 +0000 (0:00:00.571) 0:04:20.952 ******* 2026-02-09 05:49:39.046400 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:49:39.046407 | orchestrator | 2026-02-09 05:49:39.046413 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-09 05:49:39.046419 | orchestrator | Monday 09 February 2026 05:48:54 +0000 (0:00:00.131) 0:04:21.084 ******* 2026-02-09 05:49:39.046425 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:49:39.046431 | orchestrator | 2026-02-09 05:49:39.046437 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-09 05:49:39.046443 | orchestrator | Monday 09 February 2026 05:48:54 +0000 (0:00:00.139) 0:04:21.223 ******* 2026-02-09 05:49:39.046449 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-09 05:49:39.046485 | orchestrator | 2026-02-09 05:49:39.046492 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-09 05:49:39.046499 | orchestrator | Monday 09 February 2026 05:48:55 +0000 (0:00:00.575) 0:04:21.799 ******* 2026-02-09 05:49:39.046521 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:49:39.046530 | orchestrator | 2026-02-09 05:49:39.046540 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 
2026-02-09 05:49:39.046550 | orchestrator | Monday 09 February 2026 05:48:56 +0000 (0:00:01.415) 0:04:23.214 ******* 2026-02-09 05:49:39.046560 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.046571 | orchestrator | 2026-02-09 05:49:39.046581 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-09 05:49:39.046588 | orchestrator | Monday 09 February 2026 05:48:57 +0000 (0:00:01.035) 0:04:24.250 ******* 2026-02-09 05:49:39.046594 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.046600 | orchestrator | 2026-02-09 05:49:39.046606 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-09 05:49:39.046612 | orchestrator | Monday 09 February 2026 05:48:58 +0000 (0:00:01.379) 0:04:25.629 ******* 2026-02-09 05:49:39.046622 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:49:39.046633 | orchestrator | 2026-02-09 05:49:39.046643 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-09 05:49:39.046653 | orchestrator | Monday 09 February 2026 05:49:02 +0000 (0:00:03.145) 0:04:28.775 ******* 2026-02-09 05:49:39.046663 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-09 05:49:39.046674 | orchestrator | 2026-02-09 05:49:39.046701 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-09 05:49:39.046712 | orchestrator | Monday 09 February 2026 05:49:02 +0000 (0:00:00.624) 0:04:29.399 ******* 2026-02-09 05:49:39.046723 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-09 05:49:39.046734 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.046745 | orchestrator | 2026-02-09 05:49:39.046752 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-09 05:49:39.046760 | orchestrator | Monday 09 February 2026 05:49:24 +0000 (0:00:22.240) 0:04:51.639 ******* 2026-02-09 05:49:39.046771 | orchestrator | ok: [testbed-node-0] 2026-02-09 05:49:39.046781 | orchestrator | 2026-02-09 05:49:39.046791 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-09 05:49:39.046802 | orchestrator | Monday 09 February 2026 05:49:26 +0000 (0:00:01.959) 0:04:53.599 ******* 2026-02-09 05:49:39.046812 | orchestrator | skipping: [testbed-node-0] 2026-02-09 05:49:39.046821 | orchestrator | 2026-02-09 05:49:39.046831 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-09 05:49:39.046841 | orchestrator | Monday 09 February 2026 05:49:27 +0000 (0:00:00.136) 0:04:53.735 ******* 2026-02-09 05:49:39.046860 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-09 05:49:39.046884 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-09 05:49:39.046932 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-09 05:49:39.046944 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-09 05:49:39.046957 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-09 05:49:39.046969 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__46c2ee9658d408e9ea3e51d4e5a4c7165b811b1a'}])  2026-02-09 05:49:39.046981 | orchestrator | 2026-02-09 05:49:39.046994 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-09 05:49:39.047003 | orchestrator | Monday 09 February 2026 05:49:35 +0000 (0:00:08.891) 0:05:02.626 ******* 2026-02-09 05:49:39.047009 | orchestrator | changed: [testbed-node-0] 2026-02-09 05:49:39.047016 | orchestrator | 
2026-02-09 05:49:39.047022 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-09 05:49:39.047028 | orchestrator | Monday 09 February 2026 05:49:37 +0000 (0:00:01.423) 0:05:04.050 *******
2026-02-09 05:49:39.047034 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:49:39.047062 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 05:49:39.047069 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 05:49:39.047076 | orchestrator |
2026-02-09 05:49:39.047082 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-09 05:49:39.047088 | orchestrator | Monday 09 February 2026 05:49:38 +0000 (0:00:01.135) 0:05:05.186 *******
2026-02-09 05:49:39.047094 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 05:49:39.047101 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 05:49:39.047107 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 05:49:39.047113 | orchestrator | skipping: [testbed-node-0]
2026-02-09 05:49:39.047119 | orchestrator |
2026-02-09 05:49:39.047133 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-02-09 06:20:58.189692 | orchestrator | Monday 09 February 2026 05:49:39 +0000 (0:00:00.486) 0:05:05.673 *******
2026-02-09 06:20:58.189822 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:20:58.189836 | orchestrator |
2026-02-09 06:20:58.189845 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-02-09 06:20:58.189854 | orchestrator | Monday 09 February 2026 05:49:39 +0000 (0:00:00.129) 0:05:05.803 *******
2026-02-09 06:20:58.189894 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] ***
2026-02-09 06:20:58.190095 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left).
2026-02-09 06:20:58.190269 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left).
2026-02-09 06:20:58.190468 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left).
2026-02-09 06:20:58.190635 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left).
2026-02-09 06:20:58.190808 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left).
2026-02-09 06:20:58.190956 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin
2026-02-09 06:20:58.190964 | orchestrator | (): '5fc7f085-8b47-9592-2585-000000000297'
2026-02-09 06:20:58.191007 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.8", "quorum_status", "--format", "json"], "delta": "0:05:00.275331", "end": "2026-02-09 06:20:57.922808", "msg": "non-zero return code", "rc": 1, "start": "2026-02-09 06:15:57.647477", "stderr": "2026-02-09T06:20:57.906+0000 77325c727640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-02-09T06:20:57.906+0000 77325c727640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []}
2026-02-09 06:21:02.262317 | orchestrator | 2026-02-09 06:21:02 | INFO  | Task 91b267d3-c128-496d-96ef-d880089ca9c2 (ceph-rolling_update) was prepared for execution.
2026-02-09 06:21:02.263181 | orchestrator | 2026-02-09 06:21:02 | INFO  | It takes a moment until task 91b267d3-c128-496d-96ef-d880089ca9c2 (ceph-rolling_update) has been started and output is visible here.
2026-02-09 06:22:03.608164 | orchestrator |
2026-02-09 06:22:03.608266 | orchestrator | TASK [Unmask the mon service] **************************************************
2026-02-09 06:22:03.608279 | orchestrator | Monday 09 February 2026 06:20:58 +0000 (0:31:19.014) 0:36:24.817 *******
2026-02-09 06:22:03.608306 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:22:03.608313 | orchestrator |
2026-02-09 06:22:03.608320 | orchestrator | TASK [Unmask the mgr service] **************************************************
2026-02-09 06:22:03.608328 | orchestrator | Monday 09 February 2026 06:20:58 +0000 (0:00:00.768) 0:36:25.586 *******
2026-02-09 06:22:03.608334 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:22:03.608340 | orchestrator |
2026-02-09 06:22:03.608346 | orchestrator | TASK [Stop the playbook execution] *********************************************
2026-02-09 06:22:03.608353 | orchestrator | Monday 09 February 2026 06:20:59 +0000 (0:00:01.041) 0:36:26.627 *******
2026-02-09 06:22:03.608359 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin
2026-02-09 06:22:03.608380 | orchestrator | (): '5fc7f085-8b47-9592-2585-0000000002a2'
2026-02-09 06:22:03.608392 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. Please, check the previous task results."}
2026-02-09 06:22:03.608396 | orchestrator |
2026-02-09 06:22:03.608400 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 06:22:03.608405 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
2026-02-09 06:22:03.608409 | orchestrator | testbed-manager : ok=25 changed=1 unreachable=0 failed=0 skipped=57 rescued=0 ignored=0
2026-02-09 06:22:03.608413 | orchestrator | testbed-node-0 : ok=121 changed=10 unreachable=0 failed=1 skipped=164 rescued=1 ignored=0
2026-02-09 06:22:03.608418 | orchestrator | testbed-node-1 : ok=25 changed=2 unreachable=0 failed=0 skipped=57 rescued=0 ignored=0
2026-02-09 06:22:03.608422 | orchestrator | testbed-node-2 : ok=25 changed=2 unreachable=0 failed=0 skipped=57 rescued=0 ignored=0
2026-02-09 06:22:03.608426 | orchestrator | testbed-node-3 : ok=33 changed=2 unreachable=0 failed=0 skipped=74 rescued=0 ignored=0
2026-02-09 06:22:03.608430 | orchestrator | testbed-node-4 : ok=33 changed=2 unreachable=0 failed=0 skipped=71 rescued=0 ignored=0
2026-02-09 06:22:03.608434 | orchestrator | testbed-node-5 : ok=33 changed=2 unreachable=0 failed=0 skipped=71 rescued=0 ignored=0
2026-02-09 06:22:03.608437 | orchestrator |
2026-02-09 06:22:03.608449 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 06:22:03.608453 | orchestrator | Monday 09 February 2026 06:21:01 +0000 (0:00:01.576) 0:36:28.204 *******
2026-02-09 06:22:03.608456 | orchestrator | ===============================================================================
2026-02-09 06:22:03.608460 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 1879.01s
2026-02-09 06:22:03.608464 | orchestrator | Gather and delegate facts ---------------------------------------------- 33.20s
2026-02-09 06:22:03.608467 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.24s
2026-02-09 06:22:03.608471 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 11.67s
2026-02-09 06:22:03.608475 | orchestrator | Set cluster configs ----------------------------------------------------- 9.46s
2026-02-09 06:22:03.608479 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 8.89s
2026-02-09 06:22:03.608482 | orchestrator | ceph-infra : Update cache for Debian based OSs -------------------------- 7.29s
2026-02-09 06:22:03.608486 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 5.81s
2026-02-09 06:22:03.608494 | orchestrator | Gather facts ------------------------------------------------------------ 4.51s
2026-02-09 06:22:03.608498 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 3.15s
2026-02-09 06:22:03.608501 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 2.85s
2026-02-09 06:22:03.608505 | orchestrator | Stop ceph mon ----------------------------------------------------------- 2.82s
2026-02-09 06:22:03.608509 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 2.44s
2026-02-09 06:22:03.608513 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.42s
2026-02-09 06:22:03.608516 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 2.33s
2026-02-09 06:22:03.608520 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.27s
2026-02-09 06:22:03.608524 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 2.13s
2026-02-09 06:22:03.608527 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 2.11s 2026-02-09 06:22:03.608532 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 1.96s 2026-02-09 06:22:03.608547 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.94s 2026-02-09 06:22:03.608551 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-09 06:22:03.608555 | orchestrator | 2.16.14 2026-02-09 06:22:03.608559 | orchestrator | 2026-02-09 06:22:03.608563 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-09 06:22:03.608567 | orchestrator | 2026-02-09 06:22:03.608570 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-09 06:22:03.608574 | orchestrator | Monday 09 February 2026 06:21:09 +0000 (0:00:01.409) 0:00:01.409 ******* 2026-02-09 06:22:03.608578 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-09 06:22:03.608581 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-09 06:22:03.608588 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-09 06:22:03.608591 | orchestrator | skipping: [localhost] 2026-02-09 06:22:03.608595 | orchestrator | 2026-02-09 06:22:03.608599 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-02-09 06:22:03.608603 | orchestrator | 2026-02-09 06:22:03.608606 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-09 06:22:03.608610 | orchestrator | Monday 09 February 2026 06:21:11 +0000 (0:00:02.307) 0:00:03.717 ******* 2026-02-09 06:22:03.608614 | orchestrator | ok: [testbed-node-0] => { 2026-02-09 06:22:03.608618 | orchestrator |  "msg": "gather facts on all 
Ceph hosts for following reference" 2026-02-09 06:22:03.608622 | orchestrator | } 2026-02-09 06:22:03.608626 | orchestrator | ok: [testbed-node-1] => { 2026-02-09 06:22:03.608630 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-09 06:22:03.608633 | orchestrator | } 2026-02-09 06:22:03.608637 | orchestrator | ok: [testbed-node-2] => { 2026-02-09 06:22:03.608641 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-09 06:22:03.608644 | orchestrator | } 2026-02-09 06:22:03.608648 | orchestrator | ok: [testbed-node-3] => { 2026-02-09 06:22:03.608652 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-09 06:22:03.608656 | orchestrator | } 2026-02-09 06:22:03.608659 | orchestrator | ok: [testbed-node-4] => { 2026-02-09 06:22:03.608663 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-09 06:22:03.608667 | orchestrator | } 2026-02-09 06:22:03.608671 | orchestrator | ok: [testbed-node-5] => { 2026-02-09 06:22:03.608674 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-09 06:22:03.608678 | orchestrator | } 2026-02-09 06:22:03.608682 | orchestrator | ok: [testbed-manager] => { 2026-02-09 06:22:03.608686 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-09 06:22:03.608692 | orchestrator | } 2026-02-09 06:22:03.608696 | orchestrator | 2026-02-09 06:22:03.608700 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-09 06:22:03.608703 | orchestrator | Monday 09 February 2026 06:21:16 +0000 (0:00:04.906) 0:00:08.624 ******* 2026-02-09 06:22:03.608708 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:22:03.608713 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:22:03.608717 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:22:03.608722 | orchestrator | skipping: 
[testbed-node-3] 2026-02-09 06:22:03.608726 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:22:03.608731 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:22:03.608735 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:03.608740 | orchestrator | 2026-02-09 06:22:03.608744 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-09 06:22:03.608749 | orchestrator | Monday 09 February 2026 06:21:22 +0000 (0:00:06.053) 0:00:14.677 ******* 2026-02-09 06:22:03.608753 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:22:03.608758 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 06:22:03.608762 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-09 06:22:03.608767 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 06:22:03.608772 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-09 06:22:03.608776 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-09 06:22:03.608780 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-09 06:22:03.608785 | orchestrator | 2026-02-09 06:22:03.608789 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-09 06:22:03.608794 | orchestrator | Monday 09 February 2026 06:21:55 +0000 (0:00:32.987) 0:00:47.664 ******* 2026-02-09 06:22:03.608798 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:22:03.608803 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:22:03.608807 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:22:03.608811 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:22:03.608816 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:22:03.608820 | orchestrator | ok: 
[testbed-node-5] 2026-02-09 06:22:03.608825 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:03.608829 | orchestrator | 2026-02-09 06:22:03.608833 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-09 06:22:03.608838 | orchestrator | Monday 09 February 2026 06:21:57 +0000 (0:00:02.042) 0:00:49.706 ******* 2026-02-09 06:22:03.608842 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-09 06:22:03.608847 | orchestrator | 2026-02-09 06:22:03.608851 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-09 06:22:03.608856 | orchestrator | Monday 09 February 2026 06:22:00 +0000 (0:00:02.957) 0:00:52.664 ******* 2026-02-09 06:22:03.608860 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:22:03.608865 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:22:03.608869 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:22:03.608874 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:22:03.608878 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:22:03.608885 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:22:32.148575 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:32.148699 | orchestrator | 2026-02-09 06:22:32.148726 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-09 06:22:32.148741 | orchestrator | Monday 09 February 2026 06:22:03 +0000 (0:00:02.719) 0:00:55.384 ******* 2026-02-09 06:22:32.148752 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:22:32.148763 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:22:32.148774 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:22:32.148810 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:22:32.148822 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:22:32.148833 | orchestrator | ok: [testbed-node-5] 
2026-02-09 06:22:32.148843 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:32.148854 | orchestrator | 2026-02-09 06:22:32.148865 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-09 06:22:32.148876 | orchestrator | Monday 09 February 2026 06:22:05 +0000 (0:00:01.926) 0:00:57.310 ******* 2026-02-09 06:22:32.148887 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:22:32.148913 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:22:32.148924 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:22:32.148934 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:22:32.148945 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:22:32.149028 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:22:32.149040 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:32.149050 | orchestrator | 2026-02-09 06:22:32.149061 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-09 06:22:32.149072 | orchestrator | Monday 09 February 2026 06:22:08 +0000 (0:00:02.704) 0:01:00.015 ******* 2026-02-09 06:22:32.149082 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:22:32.149093 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:22:32.149103 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:22:32.149114 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:22:32.149124 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:22:32.149134 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:22:32.149145 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:32.149156 | orchestrator | 2026-02-09 06:22:32.149166 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-09 06:22:32.149177 | orchestrator | Monday 09 February 2026 06:22:10 +0000 (0:00:01.854) 0:01:01.870 ******* 2026-02-09 06:22:32.149188 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:22:32.149199 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:22:32.149209 | 
orchestrator | ok: [testbed-node-2] 2026-02-09 06:22:32.149219 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:22:32.149230 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:22:32.149240 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:22:32.149251 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:32.149261 | orchestrator | 2026-02-09 06:22:32.149272 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-09 06:22:32.149283 | orchestrator | Monday 09 February 2026 06:22:12 +0000 (0:00:02.294) 0:01:04.165 ******* 2026-02-09 06:22:32.149293 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:22:32.149304 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:22:32.149314 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:22:32.149325 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:22:32.149335 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:22:32.149346 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:22:32.149356 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:32.149367 | orchestrator | 2026-02-09 06:22:32.149377 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-09 06:22:32.149389 | orchestrator | Monday 09 February 2026 06:22:14 +0000 (0:00:02.007) 0:01:06.172 ******* 2026-02-09 06:22:32.149400 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:22:32.149412 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:22:32.149423 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:22:32.149433 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:22:32.149444 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:22:32.149454 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:22:32.149465 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:22:32.149475 | orchestrator | 2026-02-09 06:22:32.149486 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 
2026-02-09 06:22:32.149497 | orchestrator | Monday 09 February 2026 06:22:16 +0000 (0:00:02.251) 0:01:08.423 ******* 2026-02-09 06:22:32.149507 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:22:32.149518 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:22:32.149529 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:22:32.149548 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:22:32.149559 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:22:32.149569 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:22:32.149580 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:32.149590 | orchestrator | 2026-02-09 06:22:32.149601 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-09 06:22:32.149612 | orchestrator | Monday 09 February 2026 06:22:18 +0000 (0:00:02.092) 0:01:10.516 ******* 2026-02-09 06:22:32.149622 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:22:32.149633 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 06:22:32.149644 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 06:22:32.149655 | orchestrator | 2026-02-09 06:22:32.149665 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-09 06:22:32.149676 | orchestrator | Monday 09 February 2026 06:22:20 +0000 (0:00:01.695) 0:01:12.211 ******* 2026-02-09 06:22:32.149686 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:22:32.149697 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:22:32.149707 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:22:32.149717 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:22:32.149728 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:22:32.149738 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:22:32.149749 | orchestrator | ok: [testbed-manager] 2026-02-09 06:22:32.149760 | orchestrator | 2026-02-09 
06:22:32.149770 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-09 06:22:32.149781 | orchestrator | Monday 09 February 2026 06:22:22 +0000 (0:00:02.370) 0:01:14.581 ******* 2026-02-09 06:22:32.149792 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:22:32.149803 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 06:22:32.149813 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 06:22:32.149824 | orchestrator | 2026-02-09 06:22:32.149835 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-09 06:22:32.149863 | orchestrator | Monday 09 February 2026 06:22:26 +0000 (0:00:03.441) 0:01:18.023 ******* 2026-02-09 06:22:32.149875 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-09 06:22:32.149886 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-09 06:22:32.149897 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-09 06:22:32.149908 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:22:32.149918 | orchestrator | 2026-02-09 06:22:32.149929 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-09 06:22:32.149940 | orchestrator | Monday 09 February 2026 06:22:27 +0000 (0:00:01.473) 0:01:19.497 ******* 2026-02-09 06:22:32.149975 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-09 06:22:32.149989 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 
'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-09 06:22:32.150001 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-09 06:22:32.150012 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:22:32.150082 | orchestrator | 2026-02-09 06:22:32.150094 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-09 06:22:32.150105 | orchestrator | Monday 09 February 2026 06:22:29 +0000 (0:00:01.988) 0:01:21.486 ******* 2026-02-09 06:22:32.150118 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-09 06:22:32.150140 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-09 06:22:32.150152 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-09 06:22:32.150163 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:22:32.150174 | orchestrator | 2026-02-09 06:22:32.150185 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-09 06:22:32.150195 | orchestrator | Monday 09 February 2026 06:22:30 +0000 (0:00:01.204) 0:01:22.690 ******* 2026-02-09 06:22:32.150208 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2317507ded62', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-09 06:22:23.448693', 'end': '2026-02-09 06:22:23.496301', 'delta': '0:00:00.047608', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2317507ded62'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-09 06:22:32.150233 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'ab15bd6989cf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-09 06:22:24.378224', 'end': '2026-02-09 06:22:24.434450', 'delta': '0:00:00.056226', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ab15bd6989cf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-02-09 06:23:00.620997 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '08d9b4f0b230', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-09 06:22:24.957739', 'end': '2026-02-09 06:22:25.001187', 'delta': '0:00:00.043448', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['08d9b4f0b230'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-09 06:23:00.621075 | orchestrator | 2026-02-09 06:23:00.621083 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-09 06:23:00.621103 | orchestrator | Monday 09 February 2026 06:22:32 +0000 (0:00:01.234) 0:01:23.925 ******* 2026-02-09 06:23:00.621107 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:23:00.621112 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:23:00.621116 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:23:00.621120 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:23:00.621123 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:23:00.621127 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:23:00.621131 | orchestrator | ok: [testbed-manager] 2026-02-09 06:23:00.621135 | orchestrator | 2026-02-09 06:23:00.621139 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-09 06:23:00.621142 | orchestrator | Monday 09 February 2026 06:22:34 +0000 (0:00:02.158) 0:01:26.083 ******* 2026-02-09 06:23:00.621146 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621151 | orchestrator | 2026-02-09 06:23:00.621154 | orchestrator | TASK [ceph-facts : 
Set_fact current_fsid rc 1] ********************************* 2026-02-09 06:23:00.621158 | orchestrator | Monday 09 February 2026 06:22:35 +0000 (0:00:01.245) 0:01:27.329 ******* 2026-02-09 06:23:00.621162 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:23:00.621166 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:23:00.621170 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:23:00.621174 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:23:00.621178 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:23:00.621182 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:23:00.621186 | orchestrator | ok: [testbed-manager] 2026-02-09 06:23:00.621190 | orchestrator | 2026-02-09 06:23:00.621194 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-09 06:23:00.621198 | orchestrator | Monday 09 February 2026 06:22:37 +0000 (0:00:02.131) 0:01:29.461 ******* 2026-02-09 06:23:00.621202 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:23:00.621206 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-09 06:23:00.621210 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-09 06:23:00.621214 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-09 06:23:00.621218 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-09 06:23:00.621221 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-09 06:23:00.621225 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-09 06:23:00.621229 | orchestrator | 2026-02-09 06:23:00.621233 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-09 06:23:00.621236 | orchestrator | Monday 09 February 2026 06:22:40 +0000 (0:00:03.314) 0:01:32.775 ******* 2026-02-09 06:23:00.621240 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:23:00.621244 | orchestrator | ok: [testbed-node-1] 
2026-02-09 06:23:00.621248 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:23:00.621251 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:23:00.621255 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:23:00.621258 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:23:00.621262 | orchestrator | ok: [testbed-manager] 2026-02-09 06:23:00.621266 | orchestrator | 2026-02-09 06:23:00.621270 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-09 06:23:00.621273 | orchestrator | Monday 09 February 2026 06:22:43 +0000 (0:00:02.156) 0:01:34.932 ******* 2026-02-09 06:23:00.621277 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621281 | orchestrator | 2026-02-09 06:23:00.621285 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-09 06:23:00.621289 | orchestrator | Monday 09 February 2026 06:22:44 +0000 (0:00:01.136) 0:01:36.068 ******* 2026-02-09 06:23:00.621293 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621296 | orchestrator | 2026-02-09 06:23:00.621300 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-09 06:23:00.621304 | orchestrator | Monday 09 February 2026 06:22:45 +0000 (0:00:01.318) 0:01:37.387 ******* 2026-02-09 06:23:00.621308 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621315 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:00.621319 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:00.621322 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:00.621326 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:00.621330 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:00.621333 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:00.621337 | orchestrator | 2026-02-09 06:23:00.621341 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 
2026-02-09 06:23:00.621344 | orchestrator | Monday 09 February 2026 06:22:47 +0000 (0:00:02.262) 0:01:39.649 ******* 2026-02-09 06:23:00.621348 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621352 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:00.621355 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:00.621359 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:00.621363 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:00.621366 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:00.621370 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:00.621374 | orchestrator | 2026-02-09 06:23:00.621378 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-09 06:23:00.621391 | orchestrator | Monday 09 February 2026 06:22:49 +0000 (0:00:01.948) 0:01:41.598 ******* 2026-02-09 06:23:00.621396 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621400 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:00.621404 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:00.621408 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:00.621412 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:00.621416 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:00.621420 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:00.621424 | orchestrator | 2026-02-09 06:23:00.621432 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-09 06:23:00.621436 | orchestrator | Monday 09 February 2026 06:22:52 +0000 (0:00:02.297) 0:01:43.895 ******* 2026-02-09 06:23:00.621440 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621444 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:00.621448 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:00.621452 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:00.621456 | 
orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:00.621460 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:00.621464 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:00.621468 | orchestrator | 2026-02-09 06:23:00.621472 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-09 06:23:00.621476 | orchestrator | Monday 09 February 2026 06:22:54 +0000 (0:00:02.022) 0:01:45.918 ******* 2026-02-09 06:23:00.621480 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621484 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:00.621488 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:00.621492 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:00.621496 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:00.621500 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:00.621504 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:00.621508 | orchestrator | 2026-02-09 06:23:00.621513 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-09 06:23:00.621518 | orchestrator | Monday 09 February 2026 06:22:56 +0000 (0:00:02.235) 0:01:48.154 ******* 2026-02-09 06:23:00.621523 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621528 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:00.621533 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:00.621538 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:00.621542 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:00.621547 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:00.621552 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:00.621556 | orchestrator | 2026-02-09 06:23:00.621561 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-09 06:23:00.621570 | orchestrator | Monday 09 February 2026 
06:22:58 +0000 (0:00:01.879) 0:01:50.033 ******* 2026-02-09 06:23:00.621575 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.621580 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:00.621585 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:00.621589 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:00.621594 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:00.621598 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:00.621603 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:00.621608 | orchestrator | 2026-02-09 06:23:00.621613 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-09 06:23:00.621618 | orchestrator | Monday 09 February 2026 06:23:00 +0000 (0:00:02.213) 0:01:52.247 ******* 2026-02-09 06:23:00.621624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.621630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.621635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.621642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-54-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 06:23:00.621653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e53c6ccf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'uuids': 
['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 06:23:00.774654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774679 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:00.774711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 06:23:00.774768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': 
[], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:00.774875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '05884397', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': 
{'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 06:23:01.147448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147590 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:01.147604 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 06:23:01.147666 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '669d190d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16'], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-09 06:23:01.147755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147776 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:01.147791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.147818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d', 'dm-uuid-LVM-5Oms0YhgvCVrWp80wJ4aA96yxcElodY708xUFI15dbkcdnHIR6L7mBfIOccNLzlf'], 'uuids': ['92075616-c4e2-4925-8a52-781f81959675'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04e8f271', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf']}})  2026-02-09 06:23:01.147837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa', 'scsi-SQEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '96ef4066', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 06:23:01.277038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UekHwl-BrrL-tQwo-R3UW-N6L4-qGv4-ixmNDb', 'scsi-0QEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d', 'scsi-SQEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6e78f5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b']}})  2026-02-09 06:23:01.277137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.277155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.277169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 06:23:01.277198 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.277232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ', 'dm-uuid-CRYPT-LUKS2-de5e11f498514777b9f5c3124a9d07d1-0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-09 06:23:01.277244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.277289 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b', 'dm-uuid-LVM-0WjeRAA0lqf3cpEn6bug4xs5UGMazLjB0h01y39wS0A1Owicu3DkC9MW8cY3xQUQ'], 'uuids': ['de5e11f4-9851-4777-b9f5-c3124a9d07d1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6e78f5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ']}})  2026-02-09 06:23:01.277313 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-DXOpal-X33W-ipPf-IHHU-xTym-5svh-1uUmz7', 'scsi-0QEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4', 'scsi-SQEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04e8f271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d']}})  2026-02-09 06:23:01.277333 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.277383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62fae712', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-09 06:23:01.277423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.277457 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.504255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.504365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf', 'dm-uuid-CRYPT-LUKS2-92075616c4e249258a52781f81959675-08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 
'holders': []}})  2026-02-09 06:23:01.504401 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9', 'dm-uuid-LVM-3CHn6ZP2pM8HpEDxSzeilwVQRF6lfj6OM8VSybDQwMAeXi61wvDItRKk6IUvThlx'], 'uuids': ['50c6b72f-c737-47f8-b44b-c4ff80acfe27'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca63f30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['M8VSyb-DQwM-AeXi-61wv-DItR-Kk6I-UvThlx']}})  2026-02-09 06:23:01.504453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24', 'scsi-SQEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'accd83ee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-09 06:23:01.504466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GwhUsL-bhJV-LTOj-ZPeb-I83T-YRPV-54WlOk', 'scsi-0QEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509', 'scsi-SQEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '31e706da', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3']}})  2026-02-09 06:23:01.504477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.504488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.504518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-49-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 06:23:01.504529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.504540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qNBTok-CpiF-fT2O-DyZT-Z76G-se0H-WZzEjm', 'dm-uuid-CRYPT-LUKS2-6090f72cb53d48828358e477240bcd4c-qNBTok-CpiF-fT2O-DyZT-Z76G-se0H-WZzEjm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-09 06:23:01.504556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.504571 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3', 'dm-uuid-LVM-28EU5fYWgLFVVTr1j10NPpT02LXZ3m2dqNBTokCpiFfT2ODyZTZ76Gse0HWZzEjm'], 'uuids': ['6090f72c-b53d-4882-8358-e477240bcd4c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '31e706da', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qNBTok-CpiF-fT2O-DyZT-Z76G-se0H-WZzEjm']}})  2026-02-09 06:23:01.504583 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:01.504595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-TEtRPa-KFlO-eA6E-SkhX-jKKT-2BmX-PRBRTw', 'scsi-0QEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c', 'scsi-SQEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca63f30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9']}})  2026-02-09 06:23:01.504605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.504627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9ffd840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 06:23:01.595288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.595384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.595400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-M8VSyb-DQwM-AeXi-61wv-DItR-Kk6I-UvThlx', 'dm-uuid-CRYPT-LUKS2-50c6b72fc73747f8b44bc4ff80acfe27-M8VSyb-DQwM-AeXi-61wv-DItR-Kk6I-UvThlx'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-09 06:23:01.595415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.595428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6', 'dm-uuid-LVM-UtcmtJOb91d0iC1jVKeu7Rh960XYKnyIcb9DX8DrOUkJ6Npc5MMds8BTnO00gFXN'], 'uuids': ['edbf4323-e023-483f-8845-3d4d18b95c7e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1815f4db', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN']}})  2026-02-09 06:23:01.595442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0', 'scsi-SQEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b185251', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 06:23:01.595476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-jvH3Zw-djyF-WIKe-T88H-f7IR-FEUt-vCkV4E', 'scsi-0QEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717', 'scsi-SQEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad4d2000', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92']}})  2026-02-09 06:23:01.595512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.595525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.595537 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 06:23:01.595549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.595560 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p97ku3-6NfS-W31v-os0G-j86u-7Mmr-IxN6I0', 'dm-uuid-CRYPT-LUKS2-82e0402e2657452988ce543ce32f645b-p97ku3-6NfS-W31v-os0G-j86u-7Mmr-IxN6I0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-09 06:23:01.595572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.595584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92', 'dm-uuid-LVM-SZPyknUsbhfLaF3x5K31ctP0vcigu1Pwp97ku36NfSW31vos0Gj86u7MmrIxN6I0'], 'uuids': ['82e0402e-2657-4529-88ce-543ce32f645b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad4d2000', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p97ku3-6NfS-W31v-os0G-j86u-7Mmr-IxN6I0']}})  2026-02-09 06:23:01.595624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-nj2fwl-jxqG-fYtS-q2di-jVVW-fVes-RibCJ0', 'scsi-0QEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46', 'scsi-SQEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1815f4db', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6']}})  2026-02-09 06:23:01.746262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746363 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:01.746384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f810d870', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15', 
'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 06:23:01.746400 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN', 
'dm-uuid-CRYPT-LUKS2-edbf4323e023483f88453d4d18b95c7e-cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746500 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:01.746550 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746563 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746575 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746591 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': 
{'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-25-19-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 06:23:01.746610 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746628 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746651 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:01.746687 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '07b5cadf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-09 06:23:03.057230 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:03.057336 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:23:03.057353 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:03.057366 | orchestrator | 2026-02-09 06:23:03.057376 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-09 06:23:03.057387 | orchestrator | Monday 09 February 2026 06:23:02 +0000 (0:00:02.431) 0:01:54.678 ******* 2026-02-09 06:23:03.057422 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.057436 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.057446 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.057471 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-54-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.057499 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.057511 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.057528 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.057546 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e53c6ccf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.057568 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321359 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321449 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321457 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321462 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321479 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-55-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321493 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321509 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321518 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321530 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '05884397', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15', 
'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1', 'scsi-SQEMU_QEMU_HARDDISK_05884397-9613-4241-8546-48042913fb5f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321537 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:03.321544 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.321553 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.588937 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589052 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589061 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589083 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589090 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589097 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589139 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589154 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '669d190d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1', 'scsi-SQEMU_QEMU_HARDDISK_669d190d-3883-4e68-b86c-8247f53b6ca7-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589163 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589169 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.589181 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:03.589195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d', 'dm-uuid-LVM-5Oms0YhgvCVrWp80wJ4aA96yxcElodY708xUFI15dbkcdnHIR6L7mBfIOccNLzlf'], 'uuids': ['92075616-c4e2-4925-8a52-781f81959675'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '04e8f271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa', 'scsi-SQEMU_QEMU_HARDDISK_96ef4066-b91b-4665-8e67-19d3f9b9c2aa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '96ef4066', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607297 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-UekHwl-BrrL-tQwo-R3UW-N6L4-qGv4-ixmNDb', 'scsi-0QEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d', 'scsi-SQEMU_QEMU_HARDDISK_e6e78f5c-a05f-4a2f-8630-adfade66484d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6e78f5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607363 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ', 'dm-uuid-CRYPT-LUKS2-de5e11f498514777b9f5c3124a9d07d1-0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--709cc28b--6adb--555a--83e9--344e81441f7b-osd--block--709cc28b--6adb--555a--83e9--344e81441f7b', 'dm-uuid-LVM-0WjeRAA0lqf3cpEn6bug4xs5UGMazLjB0h01y39wS0A1Owicu3DkC9MW8cY3xQUQ'], 'uuids': ['de5e11f4-9851-4777-b9f5-c3124a9d07d1'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6e78f5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['0h01y3-9wS0-A1Ow-icu3-DkC9-MW8c-Y3xQUQ']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.607439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-DXOpal-X33W-ipPf-IHHU-xTym-5svh-1uUmz7', 'scsi-0QEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4', 'scsi-SQEMU_QEMU_HARDDISK_04e8f271-95dc-41c9-84a5-801ade107da4'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '04e8f271', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--244f969e--c6c5--5568--af21--d52fe589178d-osd--block--244f969e--c6c5--5568--af21--d52fe589178d']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.983734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.983828 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:03.983864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '62fae712', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1', 'scsi-SQEMU_QEMU_HARDDISK_62fae712-754c-4f2b-a4e9-8035d76f7af8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.983903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.983930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.983942 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf', 'dm-uuid-CRYPT-LUKS2-92075616c4e249258a52781f81959675-08xUFI-15db-kcdn-HIR6-L7mB-fIOc-cNLzlf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.984031 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.984044 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9', 'dm-uuid-LVM-3CHn6ZP2pM8HpEDxSzeilwVQRF6lfj6OM8VSybDQwMAeXi61wvDItRKk6IUvThlx'], 'uuids': ['50c6b72f-c737-47f8-b44b-c4ff80acfe27'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca63f30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['M8VSyb-DQwM-AeXi-61wv-DItR-Kk6I-UvThlx']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.984063 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24', 'scsi-SQEMU_QEMU_HARDDISK_accd83ee-77ec-4f4c-88d5-19cec15f3e24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'accd83ee', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:03.984074 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:03.984091 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GwhUsL-bhJV-LTOj-ZPeb-I83T-YRPV-54WlOk', 'scsi-0QEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509', 'scsi-SQEMU_QEMU_HARDDISK_31e706da-f17a-4e24-9ea1-628640491509'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '31e706da', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098298 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098404 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098417 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098457 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qNBTok-CpiF-fT2O-DyZT-Z76G-se0H-WZzEjm', 'dm-uuid-CRYPT-LUKS2-6090f72cb53d48828358e477240bcd4c-qNBTok-CpiF-fT2O-DyZT-Z76G-se0H-WZzEjm'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098467 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098507 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6', 'dm-uuid-LVM-UtcmtJOb91d0iC1jVKeu7Rh960XYKnyIcb9DX8DrOUkJ6Npc5MMds8BTnO00gFXN'], 'uuids': ['edbf4323-e023-483f-8845-3d4d18b95c7e'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '1815f4db', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098517 | orchestrator | skipping: [testbed-node-5] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0', 'scsi-SQEMU_QEMU_HARDDISK_1b185251-3d7a-4eb0-a8d7-b34a7a2bddd0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1b185251', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098536 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--2c0211a0--e551--5710--9a38--56737a7f5fb3-osd--block--2c0211a0--e551--5710--9a38--56737a7f5fb3', 'dm-uuid-LVM-28EU5fYWgLFVVTr1j10NPpT02LXZ3m2dqNBTokCpiFfT2ODyZTZ76Gse0HWZzEjm'], 'uuids': ['6090f72c-b53d-4882-8358-e477240bcd4c'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '31e706da', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qNBTok-CpiF-fT2O-DyZT-Z76G-se0H-WZzEjm']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.098551 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': 
{'ids': ['lvm-pv-uuid-jvH3Zw-djyF-WIKe-T88H-f7IR-FEUt-vCkV4E', 'scsi-0QEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717', 'scsi-SQEMU_QEMU_HARDDISK_ad4d2000-db3f-4cfd-be49-267ba7004717'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad4d2000', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-TEtRPa-KFlO-eA6E-SkhX-jKKT-2BmX-PRBRTw', 'scsi-0QEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c', 'scsi-SQEMU_QEMU_HARDDISK_aca63f30-83ce-4e61-8910-3b8ba5d1369c'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aca63f30', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--84c19404--a9f4--50a5--b230--c81d6fb6b3c9-osd--block--84c19404--a9f4--50a5--b230--c81d6fb6b3c9']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194525 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194583 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-52-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194613 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194627 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd'], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9ffd840', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e9ffd840-8794-4a3d-8eb0-6a90290484dd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194644 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-p97ku3-6NfS-W31v-os0G-j86u-7Mmr-IxN6I0', 'dm-uuid-CRYPT-LUKS2-82e0402e2657452988ce543ce32f645b-p97ku3-6NfS-W31v-os0G-j86u-7Mmr-IxN6I0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194652 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.194666 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500378 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500461 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--46be6a4f--1579--5910--a72e--9190b5238c92-osd--block--46be6a4f--1579--5910--a72e--9190b5238c92', 'dm-uuid-LVM-SZPyknUsbhfLaF3x5K31ctP0vcigu1Pwp97ku36NfSW31vos0Gj86u7MmrIxN6I0'], 'uuids': ['82e0402e-2657-4529-88ce-543ce32f645b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad4d2000', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['p97ku3-6NfS-W31v-os0G-j86u-7Mmr-IxN6I0']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500483 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': 
{'virtual': 1, 'links': {'ids': ['dm-name-M8VSyb-DQwM-AeXi-61wv-DItR-Kk6I-UvThlx', 'dm-uuid-CRYPT-LUKS2-50c6b72fc73747f8b44bc4ff80acfe27-M8VSyb-DQwM-AeXi-61wv-DItR-Kk6I-UvThlx'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500488 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-nj2fwl-jxqG-fYtS-q2di-jVVW-fVes-RibCJ0', 'scsi-0QEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46', 'scsi-SQEMU_QEMU_HARDDISK_1815f4db-c191-49bf-971c-f1dbc8705b46'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1815f4db', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fca1079b--480c--5ada--8652--888828a580b6-osd--block--fca1079b--480c--5ada--8652--888828a580b6']}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500495 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500516 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f810d870', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14', 
'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1', 'scsi-SQEMU_QEMU_HARDDISK_f810d870-b1b5-47b5-8aca-c0a0a7072d9d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500525 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500529 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500533 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN', 'dm-uuid-CRYPT-LUKS2-edbf4323e023483f88453d4d18b95c7e-cb9DX8-DrOU-kJ6N-pc5M-Mds8-BTnO-00gFXN'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.500541 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:04.507846 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:04.507927 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508046 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508055 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508061 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-25-19-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508067 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508072 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508087 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508103 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '07b5cadf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part16', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 
'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part14', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part15', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part1', 'scsi-SQEMU_QEMU_HARDDISK_07b5cadf-5aeb-4e31-9bf7-fe940ba942fa-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508110 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508115 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:23:04.508120 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:04.508125 | orchestrator | 2026-02-09 06:23:04.508133 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-09 06:23:38.498904 | orchestrator | Monday 09 February 2026 06:23:05 +0000 (0:00:02.661) 0:01:57.340 ******* 2026-02-09 06:23:38.499072 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:23:38.499084 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:23:38.499090 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:23:38.499096 | orchestrator | ok: [testbed-node-3] 2026-02-09 
06:23:38.499102 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:23:38.499109 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:23:38.499115 | orchestrator | ok: [testbed-manager] 2026-02-09 06:23:38.499120 | orchestrator | 2026-02-09 06:23:38.499128 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-09 06:23:38.499134 | orchestrator | Monday 09 February 2026 06:23:08 +0000 (0:00:02.700) 0:02:00.041 ******* 2026-02-09 06:23:38.499140 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:23:38.499146 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:23:38.499152 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:23:38.499158 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:23:38.499163 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:23:38.499169 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:23:38.499175 | orchestrator | ok: [testbed-manager] 2026-02-09 06:23:38.499181 | orchestrator | 2026-02-09 06:23:38.499205 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-09 06:23:38.499212 | orchestrator | Monday 09 February 2026 06:23:10 +0000 (0:00:02.012) 0:02:02.053 ******* 2026-02-09 06:23:38.499217 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:23:38.499223 | orchestrator | ok: [testbed-node-1] 2026-02-09 06:23:38.499229 | orchestrator | ok: [testbed-node-2] 2026-02-09 06:23:38.499235 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:23:38.499241 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:38.499247 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:23:38.499253 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:23:38.499259 | orchestrator | 2026-02-09 06:23:38.499265 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-09 06:23:38.499271 | orchestrator | Monday 09 February 2026 06:23:12 +0000 (0:00:02.573) 0:02:04.627 ******* 2026-02-09 06:23:38.499277 | 
orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:38.499283 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:38.499289 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:38.499295 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:38.499300 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:38.499306 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:38.499312 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:38.499317 | orchestrator | 2026-02-09 06:23:38.499323 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-09 06:23:38.499329 | orchestrator | Monday 09 February 2026 06:23:14 +0000 (0:00:01.885) 0:02:06.512 ******* 2026-02-09 06:23:38.499335 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:38.499341 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:38.499347 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:38.499352 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:38.499358 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:38.499364 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:38.499370 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-09 06:23:38.499377 | orchestrator | 2026-02-09 06:23:38.499387 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-09 06:23:38.499396 | orchestrator | Monday 09 February 2026 06:23:17 +0000 (0:00:02.787) 0:02:09.300 ******* 2026-02-09 06:23:38.499406 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:38.499415 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:38.499425 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:38.499434 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:38.499444 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:38.499455 | orchestrator | skipping: [testbed-node-5] 
2026-02-09 06:23:38.499462 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:38.499496 | orchestrator | 2026-02-09 06:23:38.499506 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-09 06:23:38.499516 | orchestrator | Monday 09 February 2026 06:23:19 +0000 (0:00:01.933) 0:02:11.234 ******* 2026-02-09 06:23:38.499526 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:23:38.499535 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-09 06:23:38.499544 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-09 06:23:38.499553 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-09 06:23:38.499562 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-09 06:23:38.499572 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-09 06:23:38.499581 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-09 06:23:38.499590 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-09 06:23:38.499598 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-09 06:23:38.499608 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-09 06:23:38.499617 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-09 06:23:38.499625 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-09 06:23:38.499638 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-09 06:23:38.499647 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-09 06:23:38.499657 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-09 06:23:38.499667 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-09 06:23:38.499677 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-09 06:23:38.499687 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-09 
06:23:38.499697 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-09 06:23:38.499707 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-09 06:23:38.499717 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-09 06:23:38.499726 | orchestrator | 2026-02-09 06:23:38.499737 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-09 06:23:38.499747 | orchestrator | Monday 09 February 2026 06:23:22 +0000 (0:00:03.557) 0:02:14.792 ******* 2026-02-09 06:23:38.499757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-09 06:23:38.499768 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-09 06:23:38.499799 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-09 06:23:38.499810 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:38.499819 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-09 06:23:38.499829 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-09 06:23:38.499840 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-09 06:23:38.499851 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:38.499861 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-09 06:23:38.499872 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-09 06:23:38.499883 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-09 06:23:38.499893 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:38.499904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-09 06:23:38.499915 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-09 06:23:38.499926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-09 06:23:38.499937 | orchestrator | skipping: [testbed-node-3] 
2026-02-09 06:23:38.500010 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-09 06:23:38.500023 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-09 06:23:38.500034 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-09 06:23:38.500046 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:38.500057 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-09 06:23:38.500077 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-09 06:23:38.500087 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-09 06:23:38.500097 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:38.500106 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-09 06:23:38.500116 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-09 06:23:38.500126 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-09 06:23:38.500135 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:38.500145 | orchestrator | 2026-02-09 06:23:38.500155 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-09 06:23:38.500165 | orchestrator | Monday 09 February 2026 06:23:25 +0000 (0:00:02.367) 0:02:17.159 ******* 2026-02-09 06:23:38.500175 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:23:38.500184 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:23:38.500195 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:23:38.500205 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:23:38.500217 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 06:23:38.500228 | orchestrator | 2026-02-09 06:23:38.500238 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-09 06:23:38.500251 | orchestrator | Monday 09 February 2026 06:23:27 +0000 (0:00:02.021) 0:02:19.181 ******* 2026-02-09 06:23:38.500260 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:38.500270 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:38.500279 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:38.500289 | orchestrator | 2026-02-09 06:23:38.500298 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-09 06:23:38.500309 | orchestrator | Monday 09 February 2026 06:23:28 +0000 (0:00:01.555) 0:02:20.737 ******* 2026-02-09 06:23:38.500376 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:38.500386 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:38.500397 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:38.500408 | orchestrator | 2026-02-09 06:23:38.500418 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-09 06:23:38.500428 | orchestrator | Monday 09 February 2026 06:23:30 +0000 (0:00:01.537) 0:02:22.274 ******* 2026-02-09 06:23:38.500439 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:38.500450 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:23:38.500461 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:23:38.500472 | orchestrator | 2026-02-09 06:23:38.500482 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-09 06:23:38.500494 | orchestrator | Monday 09 February 2026 06:23:31 +0000 (0:00:01.413) 0:02:23.688 ******* 2026-02-09 06:23:38.500505 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:23:38.500516 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:23:38.500528 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:23:38.500538 | orchestrator | 2026-02-09 06:23:38.500549 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-02-09 06:23:38.500560 | orchestrator | Monday 09 February 2026 06:23:33 +0000 (0:00:01.472) 0:02:25.161 ******* 2026-02-09 06:23:38.500571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 06:23:38.500584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 06:23:38.500595 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 06:23:38.500606 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:38.500618 | orchestrator | 2026-02-09 06:23:38.500629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-09 06:23:38.500641 | orchestrator | Monday 09 February 2026 06:23:35 +0000 (0:00:01.714) 0:02:26.875 ******* 2026-02-09 06:23:38.500652 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 06:23:38.500675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 06:23:38.500686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 06:23:38.500697 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:23:38.500707 | orchestrator | 2026-02-09 06:23:38.500717 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-09 06:23:38.500729 | orchestrator | Monday 09 February 2026 06:23:36 +0000 (0:00:01.690) 0:02:28.565 ******* 2026-02-09 06:23:38.500740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-09 06:23:38.500766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-09 06:24:27.706510 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-09 06:24:27.706616 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:24:27.706633 | orchestrator | 2026-02-09 06:24:27.706645 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-09 
06:24:27.706658 | orchestrator | Monday 09 February 2026 06:23:38 +0000 (0:00:01.707) 0:02:30.273 ******* 2026-02-09 06:24:27.706669 | orchestrator | ok: [testbed-node-3] 2026-02-09 06:24:27.706680 | orchestrator | ok: [testbed-node-4] 2026-02-09 06:24:27.706691 | orchestrator | ok: [testbed-node-5] 2026-02-09 06:24:27.706702 | orchestrator | 2026-02-09 06:24:27.706713 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-09 06:24:27.706724 | orchestrator | Monday 09 February 2026 06:23:39 +0000 (0:00:01.478) 0:02:31.751 ******* 2026-02-09 06:24:27.706735 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-09 06:24:27.706745 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-09 06:24:27.706756 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-09 06:24:27.706767 | orchestrator | 2026-02-09 06:24:27.706777 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-09 06:24:27.706803 | orchestrator | Monday 09 February 2026 06:23:41 +0000 (0:00:01.628) 0:02:33.379 ******* 2026-02-09 06:24:27.706814 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:24:27.706825 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 06:24:27.706837 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 06:24:27.706847 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-09 06:24:27.706858 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-09 06:24:27.706869 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-09 06:24:27.706879 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-09 06:24:27.706890 | orchestrator | 2026-02-09 
06:24:27.706901 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-09 06:24:27.706912 | orchestrator | Monday 09 February 2026 06:23:43 +0000 (0:00:02.101) 0:02:35.481 ******* 2026-02-09 06:24:27.706923 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:24:27.706933 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 06:24:27.706969 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 06:24:27.706980 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-09 06:24:27.706991 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-09 06:24:27.707002 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-09 06:24:27.707013 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-09 06:24:27.707023 | orchestrator | 2026-02-09 06:24:27.707034 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-02-09 06:24:27.707045 | orchestrator | Monday 09 February 2026 06:23:46 +0000 (0:00:03.029) 0:02:38.511 ******* 2026-02-09 06:24:27.707084 | orchestrator | changed: [testbed-node-3] 2026-02-09 06:24:27.707097 | orchestrator | changed: [testbed-node-4] 2026-02-09 06:24:27.707110 | orchestrator | changed: [testbed-node-5] 2026-02-09 06:24:27.707123 | orchestrator | changed: [testbed-manager] 2026-02-09 06:24:27.707135 | orchestrator | changed: [testbed-node-2] 2026-02-09 06:24:27.707148 | orchestrator | changed: [testbed-node-0] 2026-02-09 06:24:27.707161 | orchestrator | changed: [testbed-node-1] 2026-02-09 06:24:27.707173 | orchestrator | 2026-02-09 06:24:27.707185 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
***********************
2026-02-09 06:24:27.707198 | orchestrator | Monday 09 February 2026 06:23:57 +0000 (0:00:11.096) 0:02:49.607 *******
2026-02-09 06:24:27.707210 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.707222 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.707234 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.707247 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.707260 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.707273 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.707285 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.707298 | orchestrator |
2026-02-09 06:24:27.707311 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-09 06:24:27.707323 | orchestrator | Monday 09 February 2026 06:23:59 +0000 (0:00:02.122) 0:02:51.730 *******
2026-02-09 06:24:27.707336 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.707348 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.707359 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.707369 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.707380 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.707391 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.707401 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.707413 | orchestrator |
2026-02-09 06:24:27.707424 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-09 06:24:27.707435 | orchestrator | Monday 09 February 2026 06:24:01 +0000 (0:00:01.942) 0:02:53.673 *******
2026-02-09 06:24:27.707445 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:24:27.707456 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.707467 | orchestrator | ok: [testbed-node-1]
2026-02-09 06:24:27.707478 | orchestrator | ok: [testbed-node-2]
2026-02-09 06:24:27.707488 | orchestrator | ok: [testbed-node-3]
2026-02-09 06:24:27.707499 | orchestrator | ok: [testbed-node-4]
2026-02-09 06:24:27.707510 | orchestrator | ok: [testbed-node-5]
2026-02-09 06:24:27.707520 | orchestrator |
2026-02-09 06:24:27.707531 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-09 06:24:27.707542 | orchestrator | Monday 09 February 2026 06:24:04 +0000 (0:00:03.121) 0:02:56.794 *******
2026-02-09 06:24:27.707570 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-09 06:24:27.707583 | orchestrator |
2026-02-09 06:24:27.707594 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-09 06:24:27.707605 | orchestrator | Monday 09 February 2026 06:24:08 +0000 (0:00:03.116) 0:02:59.911 *******
2026-02-09 06:24:27.707616 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.707627 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.707638 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.707648 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.707659 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.707669 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.707680 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.707691 | orchestrator |
2026-02-09 06:24:27.707702 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-09 06:24:27.707713 | orchestrator | Monday 09 February 2026 06:24:10 +0000 (0:00:02.210) 0:03:02.121 *******
2026-02-09 06:24:27.707723 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.707749 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.707760 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.707771 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.707781 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.707792 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.707803 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.707814 | orchestrator |
2026-02-09 06:24:27.707824 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-09 06:24:27.707835 | orchestrator | Monday 09 February 2026 06:24:12 +0000 (0:00:02.209) 0:03:04.331 *******
2026-02-09 06:24:27.707846 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.707857 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.707867 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.707878 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.707889 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.707900 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.707910 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.707921 | orchestrator |
2026-02-09 06:24:27.707932 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-09 06:24:27.707962 | orchestrator | Monday 09 February 2026 06:24:14 +0000 (0:00:01.923) 0:03:06.254 *******
2026-02-09 06:24:27.707974 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.707985 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.707995 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.708006 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.708016 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.708027 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.708038 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.708048 | orchestrator |
2026-02-09 06:24:27.708059 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-02-09 06:24:27.708070 | orchestrator | Monday 09 February 2026 06:24:16 +0000 (0:00:02.240) 0:03:08.495 *******
2026-02-09 06:24:27.708080 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.708091 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.708101 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.708112 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.708122 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.708133 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.708143 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.708154 | orchestrator |
2026-02-09 06:24:27.708165 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-09 06:24:27.708175 | orchestrator | Monday 09 February 2026 06:24:18 +0000 (0:00:02.078) 0:03:10.574 *******
2026-02-09 06:24:27.708186 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.708197 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.708207 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.708218 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.708273 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.708288 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.708300 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.708312 | orchestrator |
2026-02-09 06:24:27.708324 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-02-09 06:24:27.708336 | orchestrator | Monday 09 February 2026 06:24:20 +0000 (0:00:02.130) 0:03:12.705 *******
2026-02-09 06:24:27.708347 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.708359 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.708370 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.708382 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.708394 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.708405 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.708417 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.708429 | orchestrator |
2026-02-09 06:24:27.708441 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-02-09 06:24:27.708461 | orchestrator | Monday 09 February 2026 06:24:23 +0000 (0:00:02.467) 0:03:14.876 *******
2026-02-09 06:24:27.708473 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.708485 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.708497 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.708509 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.708520 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.708532 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.708543 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.708555 | orchestrator |
2026-02-09 06:24:27.708567 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-02-09 06:24:27.708579 | orchestrator | Monday 09 February 2026 06:24:25 +0000 (0:00:02.467) 0:03:17.344 *******
2026-02-09 06:24:27.708590 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:27.708602 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:27.708614 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:27.708625 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:27.708637 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:27.708648 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:27.708660 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:27.708672 | orchestrator |
2026-02-09 06:24:27.708684 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-02-09 06:24:27.708703 | orchestrator | Monday 09 February 2026 06:24:27 +0000 (0:00:02.134) 0:03:19.478 *******
2026-02-09 06:24:53.911218 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.911345 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.911357 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.911367 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.911375 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.911384 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.911394 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.911403 | orchestrator |
2026-02-09 06:24:53.911413 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-02-09 06:24:53.911423 | orchestrator | Monday 09 February 2026 06:24:29 +0000 (0:00:02.040) 0:03:21.519 *******
2026-02-09 06:24:53.911432 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.911440 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.911449 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.911458 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.911466 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.911475 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.911500 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.911509 | orchestrator |
2026-02-09 06:24:53.911518 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-02-09 06:24:53.911527 | orchestrator | Monday 09 February 2026 06:24:31 +0000 (0:00:02.129) 0:03:23.648 *******
2026-02-09 06:24:53.911536 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.911544 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.911553 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.911562 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.911570 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.911578 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.911587 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.911595 | orchestrator |
2026-02-09 06:24:53.911604 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-02-09 06:24:53.911613 | orchestrator | Monday 09 February 2026 06:24:33 +0000 (0:00:01.918) 0:03:25.566 *******
2026-02-09 06:24:53.911622 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.911630 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.911639 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.911649 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 06:24:53.911684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 06:24:53.911693 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.911702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 06:24:53.911711 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 06:24:53.911719 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.911728 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 06:24:53.911736 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 06:24:53.911745 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.911753 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.911762 | orchestrator |
2026-02-09 06:24:53.911770 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-02-09 06:24:53.911779 | orchestrator | Monday 09 February 2026 06:24:35 +0000 (0:00:02.120) 0:03:27.686 *******
2026-02-09 06:24:53.911787 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.911796 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.911804 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.911813 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.911821 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.911830 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.911838 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.911847 | orchestrator |
2026-02-09 06:24:53.911855 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-02-09 06:24:53.911864 | orchestrator | Monday 09 February 2026 06:24:37 +0000 (0:00:01.934) 0:03:29.621 *******
2026-02-09 06:24:53.911872 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.911881 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.911889 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.911898 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.911906 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.911915 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.911924 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.911933 | orchestrator |
2026-02-09 06:24:53.911960 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-02-09 06:24:53.911969 | orchestrator | Monday 09 February 2026 06:24:39 +0000 (0:00:02.155) 0:03:31.777 *******
2026-02-09 06:24:53.911978 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.911986 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.911995 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.912003 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.912011 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.912020 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.912028 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.912037 | orchestrator |
2026-02-09 06:24:53.912045 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-02-09 06:24:53.912054 | orchestrator | Monday 09 February 2026 06:24:41 +0000 (0:00:01.992) 0:03:33.769 *******
2026-02-09 06:24:53.912062 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.912071 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.912096 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.912105 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.912114 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.912122 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.912138 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.912147 | orchestrator |
2026-02-09 06:24:53.912156 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-02-09 06:24:53.912164 | orchestrator | Monday 09 February 2026 06:24:44 +0000 (0:00:02.253) 0:03:36.022 *******
2026-02-09 06:24:53.912173 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.912181 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.912190 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.912198 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.912207 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.912215 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.912224 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.912232 | orchestrator |
2026-02-09 06:24:53.912245 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-02-09 06:24:53.912254 | orchestrator | Monday 09 February 2026 06:24:46 +0000 (0:00:02.059) 0:03:38.081 *******
2026-02-09 06:24:53.912263 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.912271 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.912280 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.912288 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.912297 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.912305 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.912314 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.912322 | orchestrator |
2026-02-09 06:24:53.912331 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-02-09 06:24:53.912340 | orchestrator | Monday 09 February 2026 06:24:48 +0000 (0:00:01.886) 0:03:39.968 *******
2026-02-09 06:24:53.912348 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:24:53.912357 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:24:53.912365 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:24:53.912373 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:24:53.912383 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-09 06:24:53.912392 | orchestrator |
2026-02-09 06:24:53.912400 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-02-09 06:24:53.912409 | orchestrator | Monday 09 February 2026 06:24:50 +0000 (0:00:02.441) 0:03:42.410 *******
2026-02-09 06:24:53.912418 | orchestrator | ok: [testbed-node-3]
2026-02-09 06:24:53.912427 | orchestrator | ok: [testbed-node-4]
2026-02-09 06:24:53.912436 | orchestrator | ok: [testbed-node-5]
2026-02-09 06:24:53.912444 | orchestrator |
2026-02-09 06:24:53.912453 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-02-09 06:24:53.912461 | orchestrator | Monday 09 February 2026 06:24:51 +0000 (0:00:01.360) 0:03:43.770 *******
2026-02-09 06:24:53.912470 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 06:24:53.912479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 06:24:53.912487 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.912496 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 06:24:53.912505 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 06:24:53.912513 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:24:53.912522 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 06:24:53.912531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 06:24:53.912546 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:24:53.912554 | orchestrator |
2026-02-09 06:24:53.912563 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-02-09 06:24:53.912571 | orchestrator | Monday 09 February 2026 06:24:53 +0000 (0:00:01.448) 0:03:45.219 *******
2026-02-09 06:24:53.912582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}, 'ansible_loop_var': 'item'})
2026-02-09 06:24:53.912594 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}, 'ansible_loop_var': 'item'})
2026-02-09 06:24:53.912603 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:24:53.912620 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:02.868344 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:02.868491 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:02.868531 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:02.868545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:02.868556 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:02.868568 | orchestrator |
2026-02-09 06:25:02.868580 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-02-09 06:25:02.868592 | orchestrator | Monday 09 February 2026 06:24:55 +0000 (0:00:01.684) 0:03:46.903 *******
2026-02-09 06:25:02.868604 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:02.868616 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:02.868627 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:02.868638 | orchestrator |
2026-02-09 06:25:02.868649 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-02-09 06:25:02.868660 | orchestrator | Monday 09 February 2026 06:24:56 +0000 (0:00:01.387) 0:03:48.290 *******
2026-02-09 06:25:02.868670 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:02.868681 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:02.868692 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:02.868702 | orchestrator |
2026-02-09 06:25:02.868713 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-02-09 06:25:02.868724 | orchestrator | Monday 09 February 2026 06:24:57 +0000 (0:00:01.398) 0:03:49.689 *******
2026-02-09 06:25:02.868735 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:02.868746 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:02.868782 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:02.868794 | orchestrator |
2026-02-09 06:25:02.868805 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-02-09 06:25:02.868816 | orchestrator | Monday 09 February 2026 06:24:59 +0000 (0:00:01.302) 0:03:50.991 *******
2026-02-09 06:25:02.868826 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:02.868837 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:02.868852 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:02.868864 | orchestrator |
2026-02-09 06:25:02.868878 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-02-09 06:25:02.868891 | orchestrator | Monday 09 February 2026 06:25:00 +0000 (0:00:01.381) 0:03:52.373 *******
2026-02-09 06:25:02.868904 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 06:25:02.868919 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 06:25:02.868933 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 06:25:02.868971 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 06:25:02.868985 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 06:25:02.868999 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 06:25:02.869011 | orchestrator |
2026-02-09 06:25:02.869024 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-02-09 06:25:02.869038 | orchestrator | Monday 09 February 2026 06:25:02 +0000 (0:00:02.032) 0:03:54.405 *******
2026-02-09 06:25:02.869085 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-709cc28b-6adb-555a-83e9-344e81441f7b/osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1770608567.5506058, 'mtime': 1770608567.5466056, 'ctime': 1770608567.5466056, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-709cc28b-6adb-555a-83e9-344e81441f7b/osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:02.869105 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-244f969e-c6c5-5568-af21-d52fe589178d/osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1770608586.4218886, 'mtime': 1770608586.4168885, 'ctime': 1770608586.4168885, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-244f969e-c6c5-5568-af21-d52fe589178d/osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:02.869129 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:02.869144 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3/osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1770608570.2717092, 'mtime': 1770608570.2657092, 'ctime': 1770608570.2657092, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3/osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:02.869169 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9/osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1770608588.2039764, 'mtime': 1770608588.1989765, 'ctime': 1770608588.1989765, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9/osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:08.591636 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:08.591776 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-46be6a4f-1579-5910-a72e-9190b5238c92/osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1770608567.6176636, 'mtime': 1770608567.6126635, 'ctime': 1770608567.6126635, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-46be6a4f-1579-5910-a72e-9190b5238c92/osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:08.591822 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-fca1079b-480c-5ada-8652-888828a580b6/osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1770608587.8749707, 'mtime': 1770608587.8699706, 'ctime': 1770608587.8699706, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-fca1079b-480c-5ada-8652-888828a580b6/osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:08.591836 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:08.591847 | orchestrator |
2026-02-09 06:25:08.591858 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-02-09 06:25:08.591871 | orchestrator | Monday 09 February 2026 06:25:04 +0000 (0:00:01.413) 0:03:55.820 *******
2026-02-09 06:25:08.591882 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 06:25:08.591895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 06:25:08.591905 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:08.591915 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 06:25:08.591925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})
2026-02-09 06:25:08.591934 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:08.592009 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})
2026-02-09 06:25:08.592020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})
2026-02-09 06:25:08.592030 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:08.592040 | orchestrator |
2026-02-09 06:25:08.592050 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-02-09 06:25:08.592086 | orchestrator | Monday 09 February 2026 06:25:05 +0000 (0:00:01.333) 0:03:57.153 *******
2026-02-09 06:25:08.592108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:08.592123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:08.592135 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:08.592146 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:08.592158 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:08.592170 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:08.592182 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:08.592194 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}, 'ansible_loop_var': 'item'})
2026-02-09 06:25:08.592205 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:08.592217 | orchestrator |
2026-02-09 06:25:08.592228 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-02-09 06:25:08.592240 | orchestrator | Monday 09 February 2026 06:25:06 +0000 (0:00:01.381) 0:03:58.535 *******
2026-02-09 06:25:08.592251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'})
2026-02-09 06:25:08.592263 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'})
2026-02-09 06:25:08.592274 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:08.592286 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'})
2026-02-09 06:25:08.592298 | orchestrator |
skipping: [testbed-node-4] => (item={'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'})  2026-02-09 06:25:08.592309 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:08.592320 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'})  2026-02-09 06:25:08.592332 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'})  2026-02-09 06:25:08.592344 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:08.592355 | orchestrator | 2026-02-09 06:25:08.592367 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-09 06:25:08.592385 | orchestrator | Monday 09 February 2026 06:25:08 +0000 (0:00:01.723) 0:04:00.258 ******* 2026-02-09 06:25:08.592397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-709cc28b-6adb-555a-83e9-344e81441f7b', 'data_vg': 'ceph-709cc28b-6adb-555a-83e9-344e81441f7b'}, 'ansible_loop_var': 'item'})  2026-02-09 06:25:08.592420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-244f969e-c6c5-5568-af21-d52fe589178d', 'data_vg': 'ceph-244f969e-c6c5-5568-af21-d52fe589178d'}, 'ansible_loop_var': 'item'})  2026-02-09 06:25:18.567627 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:18.567750 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 
'item': {'data': 'osd-block-2c0211a0-e551-5710-9a38-56737a7f5fb3', 'data_vg': 'ceph-2c0211a0-e551-5710-9a38-56737a7f5fb3'}, 'ansible_loop_var': 'item'})  2026-02-09 06:25:18.567772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-84c19404-a9f4-50a5-b230-c81d6fb6b3c9', 'data_vg': 'ceph-84c19404-a9f4-50a5-b230-c81d6fb6b3c9'}, 'ansible_loop_var': 'item'})  2026-02-09 06:25:18.567784 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:18.567796 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-46be6a4f-1579-5910-a72e-9190b5238c92', 'data_vg': 'ceph-46be6a4f-1579-5910-a72e-9190b5238c92'}, 'ansible_loop_var': 'item'})  2026-02-09 06:25:18.567808 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-fca1079b-480c-5ada-8652-888828a580b6', 'data_vg': 'ceph-fca1079b-480c-5ada-8652-888828a580b6'}, 'ansible_loop_var': 'item'})  2026-02-09 06:25:18.567819 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:18.567831 | orchestrator | 2026-02-09 06:25:18.567843 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-09 06:25:18.567858 | orchestrator | Monday 09 February 2026 06:25:09 +0000 (0:00:01.370) 0:04:01.629 ******* 2026-02-09 06:25:18.567877 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:25:18.567895 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:18.567912 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:18.567931 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:18.568041 | orchestrator | skipping: 
[testbed-node-4] 2026-02-09 06:25:18.568054 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:18.568065 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:25:18.568076 | orchestrator | 2026-02-09 06:25:18.568088 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-09 06:25:18.568099 | orchestrator | Monday 09 February 2026 06:25:12 +0000 (0:00:02.383) 0:04:04.012 ******* 2026-02-09 06:25:18.568110 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:25:18.568121 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:18.568134 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:18.568147 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:25:18.568161 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-09 06:25:18.568174 | orchestrator | 2026-02-09 06:25:18.568187 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-09 06:25:18.568199 | orchestrator | Monday 09 February 2026 06:25:15 +0000 (0:00:02.838) 0:04:06.852 ******* 2026-02-09 06:25:18.568245 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568344 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:18.568365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568408 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568434 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:18.568462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568507 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568540 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:18.568550 | orchestrator 
| 2026-02-09 06:25:18.568561 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-09 06:25:18.568572 | orchestrator | Monday 09 February 2026 06:25:16 +0000 (0:00:01.531) 0:04:08.383 ******* 2026-02-09 06:25:18.568582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568616 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568653 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:18.568671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-09 06:25:18.568764 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:18.568775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568828 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:18.568839 | orchestrator | 2026-02-09 06:25:18.568850 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-09 06:25:18.568861 | orchestrator | Monday 09 February 2026 06:25:18 +0000 (0:00:01.718) 0:04:10.102 ******* 2026-02-09 06:25:18.568872 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568904 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-02-09 06:25:18.568914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568924 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:18.568935 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.568979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:18.569003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:35.089103 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:35.089205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:35.089215 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:35.089221 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:35.089227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:35.089232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:35.089256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-09 06:25:35.089262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-09 06:25:35.089267 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:35.089271 | orchestrator | 2026-02-09 06:25:35.089276 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-09 06:25:35.089282 | orchestrator | Monday 09 February 2026 06:25:19 +0000 (0:00:01.434) 0:04:11.536 ******* 2026-02-09 06:25:35.089287 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:25:35.089291 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:35.089297 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:35.089305 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:35.089312 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:35.089319 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:35.089326 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:25:35.089333 | orchestrator | 2026-02-09 06:25:35.089340 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-09 06:25:35.089347 | orchestrator | Monday 09 February 2026 06:25:21 +0000 (0:00:01.954) 0:04:13.491 ******* 2026-02-09 06:25:35.089354 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:25:35.089361 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:35.089368 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:35.089376 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:35.089384 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:35.089392 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:35.089400 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:25:35.089405 | orchestrator | 2026-02-09 06:25:35.089410 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-09 06:25:35.089415 | orchestrator | Monday 09 February 2026 06:25:23 +0000 (0:00:02.171) 0:04:15.663 ******* 2026-02-09 06:25:35.089420 | orchestrator | skipping: 
[testbed-node-0] 2026-02-09 06:25:35.089424 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:35.089429 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:35.089434 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:35.089438 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:35.089443 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:35.089447 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:25:35.089452 | orchestrator | 2026-02-09 06:25:35.089457 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-02-09 06:25:35.089462 | orchestrator | Monday 09 February 2026 06:25:26 +0000 (0:00:02.200) 0:04:17.863 ******* 2026-02-09 06:25:35.089466 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:25:35.089471 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:35.089475 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:35.089480 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:35.089484 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:35.089489 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:35.089493 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:25:35.089497 | orchestrator | 2026-02-09 06:25:35.089502 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-09 06:25:35.089507 | orchestrator | Monday 09 February 2026 06:25:27 +0000 (0:00:01.886) 0:04:19.749 ******* 2026-02-09 06:25:35.089512 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:25:35.089516 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:35.089520 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:35.089525 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:35.089529 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:35.089534 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:35.089538 | 
orchestrator | skipping: [testbed-manager] 2026-02-09 06:25:35.089543 | orchestrator | 2026-02-09 06:25:35.089548 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-09 06:25:35.089558 | orchestrator | Monday 09 February 2026 06:25:30 +0000 (0:00:02.078) 0:04:21.828 ******* 2026-02-09 06:25:35.089563 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:25:35.089567 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:35.089572 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:35.089576 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:35.089581 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:35.089585 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:35.089590 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:25:35.089594 | orchestrator | 2026-02-09 06:25:35.089599 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-09 06:25:35.089603 | orchestrator | Monday 09 February 2026 06:25:32 +0000 (0:00:02.007) 0:04:23.835 ******* 2026-02-09 06:25:35.089608 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:25:35.089623 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:35.089628 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:35.089632 | orchestrator | skipping: [testbed-node-3] 2026-02-09 06:25:35.089637 | orchestrator | skipping: [testbed-node-4] 2026-02-09 06:25:35.089641 | orchestrator | skipping: [testbed-node-5] 2026-02-09 06:25:35.089646 | orchestrator | skipping: [testbed-manager] 2026-02-09 06:25:35.089650 | orchestrator | 2026-02-09 06:25:35.089671 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-09 06:25:35.089679 | orchestrator | Monday 09 February 2026 06:25:34 +0000 (0:00:02.190) 0:04:26.025 ******* 2026-02-09 06:25:35.089687 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-09 06:25:35.089697 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-09 06:25:35.089706 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-09 06:25:35.089716 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-09 06:25:35.089725 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-09 06:25:35.089735 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-09 06:25:35.089742 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:25:35.089748 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-09 06:25:35.089753 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-09 06:25:35.089759 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-09 06:25:35.089765 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-09 06:25:35.089770 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-09 06:25:35.089776 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-09 06:25:35.089787 | orchestrator | skipping: [testbed-node-1] 2026-02-09 06:25:35.089792 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-09 06:25:35.089848 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-09 06:25:35.089853 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-09 06:25:35.089859 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-09 06:25:35.089864 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-09 06:25:35.089870 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-09 06:25:35.089876 | orchestrator | skipping: [testbed-node-2] 2026-02-09 06:25:35.089881 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-09 06:25:35.089892 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-09 06:25:35.089903 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-09 06:25:39.301811 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-09 06:25:39.301921 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-09 06:25:39.301934 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-09 06:25:39.301987 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-09 06:25:39.301994 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-09 06:25:39.302000 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 
'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:39.302008 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 06:25:39.302061 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:39.302069 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:39.302074 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 06:25:39.302102 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 06:25:39.302108 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:39.302114 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 06:25:39.302120 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:39.302125 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:39.302131 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:39.302137 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:39.302143 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:39.302148 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:25:39.302154 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:39.302160 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:39.302165 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:39.302186 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:39.302191 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:39.302197 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:39.302203 | orchestrator |
2026-02-09 06:25:39.302226 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-09 06:25:39.302235 | orchestrator | Monday 09 February 2026 06:25:36 +0000 (0:00:02.244) 0:04:28.270 *******
2026-02-09 06:25:39.302240 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:25:39.302245 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:25:39.302251 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:25:39.302256 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:39.302261 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:39.302267 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:39.302272 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:25:39.302277 | orchestrator |
2026-02-09 06:25:39.302283 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-09 06:25:39.302289 | orchestrator | Monday 09 February 2026 06:25:38 +0000 (0:00:02.130) 0:04:30.400 *******
2026-02-09 06:25:39.302295 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 06:25:39.302305 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 06:25:39.302311 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:39.302316 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:39.302322 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:39.302327 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:39.302333 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 06:25:39.302340 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 06:25:39.302346 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:39.302352 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:39.302359 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:39.302365 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:39.302372 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:25:39.302379 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 06:25:39.302385 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 06:25:39.302391 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:39.302397 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:39.302403 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:39.302410 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:39.302416 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:25:39.302423 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:25:39.302434 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 06:25:59.599183 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 06:25:59.599360 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:59.599378 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:59.599390 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 06:25:59.599401 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:59.599414 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 06:25:59.599424 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:59.599434 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:59.599444 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 06:25:59.599510 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:59.599523 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 06:25:59.599533 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:59.599545 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:59.599555 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:59.599565 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:59.599574 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:59.599584 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:59.599594 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-09 06:25:59.599603 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-09 06:25:59.599613 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-09 06:25:59.599622 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-09 06:25:59.599647 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:59.599658 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-09 06:25:59.599687 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:59.599700 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:25:59.599712 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-09 06:25:59.599723 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:59.599734 | orchestrator |
2026-02-09 06:25:59.599746 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-09 06:25:59.599758 | orchestrator | Monday 09 February 2026 06:25:40 +0000 (0:00:02.302) 0:04:32.703 *******
2026-02-09 06:25:59.599769 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:25:59.599781 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:25:59.599792 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:25:59.599803 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:59.599814 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:59.599825 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:59.599836 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:25:59.599847 | orchestrator |
2026-02-09 06:25:59.599858 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-09 06:25:59.599869 | orchestrator | Monday 09 February 2026 06:25:43 +0000 (0:00:02.150) 0:04:34.957 *******
2026-02-09 06:25:59.599880 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:25:59.599892 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:25:59.599903 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:25:59.599914 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:59.599925 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:59.599936 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:59.599976 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:25:59.599989 | orchestrator |
2026-02-09 06:25:59.600000 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-09 06:25:59.600011 | orchestrator | Monday 09 February 2026 06:25:45 +0000 (0:00:02.150) 0:04:37.107 *******
2026-02-09 06:25:59.600023 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:25:59.600034 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:25:59.600045 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:25:59.600055 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:59.600064 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:59.600073 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:59.600083 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:25:59.600092 | orchestrator |
2026-02-09 06:25:59.600102 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-09 06:25:59.600112 | orchestrator | Monday 09 February 2026 06:25:47 +0000 (0:00:02.452) 0:04:39.560 *******
2026-02-09 06:25:59.600121 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-09 06:25:59.600134 | orchestrator |
2026-02-09 06:25:59.600144 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-09 06:25:59.600154 | orchestrator | Monday 09 February 2026 06:25:50 +0000 (0:00:02.865) 0:04:42.426 *******
2026-02-09 06:25:59.600164 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-09 06:25:59.600182 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-09 06:25:59.600192 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-09 06:25:59.600201 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-09 06:25:59.600211 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-09 06:25:59.600220 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-09 06:25:59.600229 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-09 06:25:59.600239 | orchestrator |
2026-02-09 06:25:59.600248 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-09 06:25:59.600258 | orchestrator | Monday 09 February 2026 06:25:52 +0000 (0:00:02.133) 0:04:44.560 *******
2026-02-09 06:25:59.600267 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:25:59.600277 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:25:59.600286 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:25:59.600296 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:59.600305 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:59.600315 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:59.600324 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:25:59.600334 | orchestrator |
2026-02-09 06:25:59.600343 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-09 06:25:59.600353 | orchestrator | Monday 09 February 2026 06:25:54 +0000 (0:00:02.152) 0:04:46.712 *******
2026-02-09 06:25:59.600362 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:25:59.600372 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:25:59.600381 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:25:59.600391 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:25:59.600400 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:25:59.600409 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:25:59.600419 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:25:59.600428 | orchestrator |
2026-02-09 06:25:59.600443 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-09 06:25:59.600453 | orchestrator | Monday 09 February 2026 06:25:57 +0000 (0:00:02.440) 0:04:49.153 *******
2026-02-09 06:25:59.600463 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:25:59.600473 | orchestrator | ok: [testbed-node-1]
2026-02-09 06:25:59.600483 | orchestrator | ok: [testbed-node-2]
2026-02-09 06:25:59.600492 | orchestrator | ok: [testbed-node-3]
2026-02-09 06:25:59.600502 | orchestrator | ok: [testbed-node-4]
2026-02-09 06:25:59.600511 | orchestrator | ok: [testbed-node-5]
2026-02-09 06:25:59.600527 | orchestrator | ok: [testbed-manager]
2026-02-09 06:26:45.574972 | orchestrator |
2026-02-09 06:26:45.575069 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-09 06:26:45.575079 | orchestrator | Monday 09 February 2026 06:25:59 +0000 (0:00:02.218) 0:04:51.372 *******
2026-02-09 06:26:45.575086 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:26:45.575095 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:26:45.575101 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:26:45.575108 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:26:45.575114 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:26:45.575120 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:26:45.575126 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:26:45.575133 | orchestrator |
2026-02-09 06:26:45.575139 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-09 06:26:45.575146 | orchestrator | Monday 09 February 2026 06:26:02 +0000 (0:00:02.449) 0:04:53.821 *******
2026-02-09 06:26:45.575152 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:26:45.575159 | orchestrator | skipping: [testbed-node-1]
2026-02-09 06:26:45.575165 | orchestrator | skipping: [testbed-node-2]
2026-02-09 06:26:45.575190 | orchestrator | skipping: [testbed-node-3]
2026-02-09 06:26:45.575196 | orchestrator | skipping: [testbed-node-4]
2026-02-09 06:26:45.575202 | orchestrator | skipping: [testbed-node-5]
2026-02-09 06:26:45.575208 | orchestrator | skipping: [testbed-manager]
2026-02-09 06:26:45.575214 | orchestrator |
2026-02-09 06:26:45.575221 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-09 06:26:45.575227 | orchestrator | Monday 09 February 2026 06:26:04 +0000 (0:00:02.478) 0:04:56.300 *******
2026-02-09 06:26:45.575233 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575239 | orchestrator |
2026-02-09 06:26:45.575246 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-09 06:26:45.575252 | orchestrator | Monday 09 February 2026 06:26:07 +0000 (0:00:02.620) 0:04:58.920 *******
2026-02-09 06:26:45.575258 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:26:45.575264 | orchestrator |
2026-02-09 06:26:45.575270 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-09 06:26:45.575276 | orchestrator |
2026-02-09 06:26:45.575282 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-09 06:26:45.575288 | orchestrator | Monday 09 February 2026 06:26:09 +0000 (0:00:01.972) 0:05:00.892 *******
2026-02-09 06:26:45.575294 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575300 | orchestrator |
2026-02-09 06:26:45.575306 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-09 06:26:45.575312 | orchestrator | Monday 09 February 2026 06:26:10 +0000 (0:00:01.464) 0:05:02.357 *******
2026-02-09 06:26:45.575318 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575324 | orchestrator |
2026-02-09 06:26:45.575330 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-09 06:26:45.575336 | orchestrator | Monday 09 February 2026 06:26:11 +0000 (0:00:01.182) 0:05:03.539 *******
2026-02-09 06:26:45.575344 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-09 06:26:45.575352 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-09 06:26:45.575358 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-09 06:26:45.575365 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-09 06:26:45.575385 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-09 06:26:45.575407 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}])
2026-02-09 06:26:45.575421 | orchestrator |
2026-02-09 06:26:45.575428 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-09 06:26:45.575434 | orchestrator |
2026-02-09 06:26:45.575440 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-09 06:26:45.575447 | orchestrator | Monday 09 February 2026 06:26:21 +0000 (0:00:10.058) 0:05:13.598 *******
2026-02-09 06:26:45.575453 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575459 | orchestrator |
2026-02-09 06:26:45.575464 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-09 06:26:45.575470 | orchestrator | Monday 09 February 2026 06:26:23 +0000 (0:00:01.532) 0:05:15.130 *******
2026-02-09 06:26:45.575476 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575482 | orchestrator |
2026-02-09 06:26:45.575488 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-09 06:26:45.575494 | orchestrator | Monday 09 February 2026 06:26:24 +0000 (0:00:01.195) 0:05:16.326 *******
2026-02-09 06:26:45.575500 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:26:45.575507 | orchestrator |
2026-02-09 06:26:45.575514 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-09 06:26:45.575520 | orchestrator | Monday 09 February 2026 06:26:25 +0000 (0:00:01.187) 0:05:17.513 *******
2026-02-09 06:26:45.575527 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575534 | orchestrator |
2026-02-09 06:26:45.575541 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-09 06:26:45.575547 | orchestrator | Monday 09 February 2026 06:26:26 +0000 (0:00:01.160) 0:05:18.674 *******
2026-02-09 06:26:45.575553 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-09 06:26:45.575560 | orchestrator |
2026-02-09 06:26:45.575566 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-09 06:26:45.575573 | orchestrator | Monday 09 February 2026 06:26:27 +0000 (0:00:01.107) 0:05:19.781 *******
2026-02-09 06:26:45.575580 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575587 | orchestrator |
2026-02-09 06:26:45.575594 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-09 06:26:45.575601 | orchestrator | Monday 09 February 2026 06:26:29 +0000 (0:00:01.474) 0:05:21.256 *******
2026-02-09 06:26:45.575609 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575616 | orchestrator |
2026-02-09 06:26:45.575623 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-09 06:26:45.575630 | orchestrator | Monday 09 February 2026 06:26:30 +0000 (0:00:01.168) 0:05:22.424 *******
2026-02-09 06:26:45.575637 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575644 | orchestrator |
2026-02-09 06:26:45.575651 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-09 06:26:45.575658 | orchestrator | Monday 09 February 2026 06:26:32 +0000 (0:00:01.533) 0:05:23.958 *******
2026-02-09 06:26:45.575664 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575671 | orchestrator |
2026-02-09 06:26:45.575678 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-09 06:26:45.575684 | orchestrator | Monday 09 February 2026 06:26:33 +0000 (0:00:01.126) 0:05:25.085 *******
2026-02-09 06:26:45.575691 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575698 | orchestrator |
2026-02-09 06:26:45.575705 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-09 06:26:45.575712 | orchestrator | Monday 09 February 2026 06:26:34 +0000 (0:00:01.127) 0:05:26.212 *******
2026-02-09 06:26:45.575719 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575726 | orchestrator |
2026-02-09 06:26:45.575732 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-09 06:26:45.575740 | orchestrator | Monday 09 February 2026 06:26:35 +0000 (0:00:01.154) 0:05:27.367 *******
2026-02-09 06:26:45.575751 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:26:45.575758 | orchestrator |
2026-02-09 06:26:45.575765 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-09 06:26:45.575773 | orchestrator | Monday 09 February 2026 06:26:36 +0000 (0:00:01.164) 0:05:28.532 *******
2026-02-09 06:26:45.575780 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575787 | orchestrator |
2026-02-09 06:26:45.575793 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-09 06:26:45.575801 | orchestrator | Monday 09 February 2026 06:26:37 +0000 (0:00:01.172) 0:05:29.704 *******
2026-02-09 06:26:45.575808 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 06:26:45.575815 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 06:26:45.575822 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 06:26:45.575829 | orchestrator |
2026-02-09 06:26:45.575835 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-09 06:26:45.575843 | orchestrator | Monday 09 February 2026 06:26:39 +0000 (0:00:01.687) 0:05:31.392 *******
2026-02-09 06:26:45.575849 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:26:45.575856 | orchestrator |
2026-02-09 06:26:45.575863 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-09 06:26:45.575869 | orchestrator | Monday 09 February 2026 06:26:40 +0000 (0:00:01.265) 0:05:32.658 *******
2026-02-09 06:26:45.575875 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 06:26:45.575882 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-09 06:26:45.575892 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-09 06:26:45.575898 | orchestrator |
2026-02-09 06:26:45.575904 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-09 06:26:45.575911 | orchestrator | Monday 09 February 2026 06:26:44 +0000 (0:00:03.232) 0:05:35.890 *******
2026-02-09 06:26:45.575917 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-09 06:26:45.575924 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-09 06:26:45.575936 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-09 06:27:08.838642 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:27:08.838795 | orchestrator |
2026-02-09 06:27:08.838825 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-09 06:27:08.838846 | orchestrator | Monday 09 February 2026 06:26:45 +0000 (0:00:01.464) 0:05:37.355 *******
2026-02-09 06:27:08.838868 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-09 06:27:08.838891 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-09 06:27:08.838910 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-09 06:27:08.838929 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:27:08.839033 | orchestrator |
2026-02-09 06:27:08.839054 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-09 06:27:08.839073 | orchestrator | Monday 09 February 2026 06:26:47 +0000 (0:00:01.945) 0:05:39.301 *******
2026-02-09 06:27:08.839095 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-09 06:27:08.839152 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-09 06:27:08.839176 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-09 06:27:08.839196 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:27:08.839216 | orchestrator |
2026-02-09 06:27:08.839230 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-09 06:27:08.839243 | orchestrator | Monday 09 February 2026 06:26:48 +0000 (0:00:01.185) 0:05:40.487 *******
2026-02-09 06:27:08.839258 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2317507ded62', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-09 06:26:41.427392', 'end': '2026-02-09 06:26:41.486484', 'delta': '0:00:00.059092', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2317507ded62'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-09 06:27:08.839314 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'ab15bd6989cf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-09 06:26:42.028275', 'end': '2026-02-09 06:26:42.076807', 'delta': '0:00:00.048532', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ab15bd6989cf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-09 06:27:08.839329 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '08d9b4f0b230', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-09 06:26:42.862074', 'end': '2026-02-09 06:26:42.913473', 'delta': '0:00:00.051399', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['08d9b4f0b230'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-09 06:27:08.839344 | orchestrator |
2026-02-09 06:27:08.839356 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-09 06:27:08.839369 | orchestrator | Monday 09 February 2026 06:26:49 +0000 (0:00:01.181) 0:05:41.668 *******
2026-02-09 06:27:08.839381 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:27:08.839405 | orchestrator |
2026-02-09 06:27:08.839419 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-09 06:27:08.839432 | orchestrator | Monday 09 February 2026 06:26:51 +0000 (0:00:01.665) 0:05:43.334 *******
2026-02-09 06:27:08.839445 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:27:08.839458 | orchestrator |
2026-02-09 06:27:08.839470 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-09 06:27:08.839483 | orchestrator | Monday 09 February 2026 06:26:52 +0000 (0:00:01.310) 0:05:44.645 *******
2026-02-09 06:27:08.839496 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:27:08.839509 | orchestrator |
2026-02-09 06:27:08.839520 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-09 06:27:08.839531 | orchestrator | Monday 09 February 2026 06:26:54 +0000 (0:00:01.183) 0:05:45.828 *******
2026-02-09 06:27:08.839542 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-09 06:27:08.839553 | orchestrator |
2026-02-09 06:27:08.839564 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-09 06:27:08.839575 | orchestrator | Monday 09 February 2026 06:26:56 +0000 (0:00:02.095) 0:05:47.924 *******
2026-02-09 06:27:08.839586 | orchestrator | ok: [testbed-node-0]
2026-02-09 06:27:08.839596 | orchestrator |
2026-02-09 06:27:08.839607 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-09 06:27:08.839618 | orchestrator | Monday 09 February 2026 06:26:57 +0000 (0:00:01.129) 0:05:49.054 *******
2026-02-09 06:27:08.839629 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:27:08.839639 | orchestrator |
2026-02-09 06:27:08.839650 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-09 06:27:08.839661 | orchestrator | Monday 09 February 2026 06:26:58 +0000 (0:00:01.115) 0:05:50.170 *******
2026-02-09 06:27:08.839672 | orchestrator | skipping: [testbed-node-0]
2026-02-09 06:27:08.839682 | orchestrator | 2026-02-09
06:27:08.839693 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-09 06:27:08.839704 | orchestrator | Monday 09 February 2026 06:26:59 +0000 (0:00:01.205) 0:05:51.376 ******* 2026-02-09 06:27:08.839715 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:08.839726 | orchestrator | 2026-02-09 06:27:08.839736 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-09 06:27:08.839747 | orchestrator | Monday 09 February 2026 06:27:00 +0000 (0:00:01.135) 0:05:52.511 ******* 2026-02-09 06:27:08.839758 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:08.839769 | orchestrator | 2026-02-09 06:27:08.839780 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-09 06:27:08.839790 | orchestrator | Monday 09 February 2026 06:27:01 +0000 (0:00:01.117) 0:05:53.629 ******* 2026-02-09 06:27:08.839801 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:08.839812 | orchestrator | 2026-02-09 06:27:08.839823 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-09 06:27:08.839833 | orchestrator | Monday 09 February 2026 06:27:02 +0000 (0:00:01.148) 0:05:54.778 ******* 2026-02-09 06:27:08.839844 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:08.839855 | orchestrator | 2026-02-09 06:27:08.839866 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-09 06:27:08.839877 | orchestrator | Monday 09 February 2026 06:27:04 +0000 (0:00:01.150) 0:05:55.928 ******* 2026-02-09 06:27:08.839887 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:08.839898 | orchestrator | 2026-02-09 06:27:08.839909 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-09 06:27:08.839920 | orchestrator | Monday 09 February 2026 06:27:05 +0000 (0:00:01.138) 
0:05:57.066 ******* 2026-02-09 06:27:08.839930 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:08.839972 | orchestrator | 2026-02-09 06:27:08.839985 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-09 06:27:08.839997 | orchestrator | Monday 09 February 2026 06:27:06 +0000 (0:00:01.159) 0:05:58.226 ******* 2026-02-09 06:27:08.840015 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:08.840026 | orchestrator | 2026-02-09 06:27:08.840042 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-09 06:27:08.840053 | orchestrator | Monday 09 February 2026 06:27:07 +0000 (0:00:01.161) 0:05:59.387 ******* 2026-02-09 06:27:08.840074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:27:10.072278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:27:10.072390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:27:10.072409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-54-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-09 06:27:10.072424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:27:10.072436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:27:10.072447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:27:10.072499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e53c6ccf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-09 06:27:10.072539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:27:10.072552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-09 06:27:10.072564 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:10.072577 | orchestrator | 2026-02-09 06:27:10.072588 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-09 06:27:10.072600 | orchestrator | Monday 09 February 2026 06:27:08 +0000 (0:00:01.236) 0:06:00.624 ******* 2026-02-09 06:27:10.072613 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:10.072626 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:10.072678 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:10.072701 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-09-02-24-54-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:24.842332 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:24.842483 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:24.842501 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:24.842562 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e53c6ccf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1', 'scsi-SQEMU_QEMU_HARDDISK_e53c6ccf-ffc4-4947-a04a-5ba76f724671-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:24.842606 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:24.842619 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-09 06:27:24.842631 | 
orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:24.842646 | orchestrator | 2026-02-09 06:27:24.842658 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-09 06:27:24.842672 | orchestrator | Monday 09 February 2026 06:27:10 +0000 (0:00:01.234) 0:06:01.858 ******* 2026-02-09 06:27:24.842683 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:27:24.842696 | orchestrator | 2026-02-09 06:27:24.842708 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-09 06:27:24.842719 | orchestrator | Monday 09 February 2026 06:27:11 +0000 (0:00:01.498) 0:06:03.358 ******* 2026-02-09 06:27:24.842731 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:27:24.842742 | orchestrator | 2026-02-09 06:27:24.842754 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-09 06:27:24.842765 | orchestrator | Monday 09 February 2026 06:27:12 +0000 (0:00:01.135) 0:06:04.493 ******* 2026-02-09 06:27:24.842777 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:27:24.842796 | orchestrator | 2026-02-09 06:27:24.842808 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-09 06:27:24.842820 | orchestrator | Monday 09 February 2026 06:27:14 +0000 (0:00:01.496) 0:06:05.990 ******* 2026-02-09 06:27:24.842834 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:24.842848 | orchestrator | 2026-02-09 06:27:24.842862 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-09 06:27:24.842875 | orchestrator | Monday 09 February 2026 06:27:15 +0000 (0:00:01.130) 0:06:07.120 ******* 2026-02-09 06:27:24.842888 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:24.842901 | orchestrator | 2026-02-09 06:27:24.842914 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-09 
06:27:24.842929 | orchestrator | Monday 09 February 2026 06:27:16 +0000 (0:00:01.262) 0:06:08.383 ******* 2026-02-09 06:27:24.842969 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:24.842982 | orchestrator | 2026-02-09 06:27:24.842995 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-09 06:27:24.843007 | orchestrator | Monday 09 February 2026 06:27:17 +0000 (0:00:01.214) 0:06:09.598 ******* 2026-02-09 06:27:24.843020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:27:24.843033 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-09 06:27:24.843045 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-09 06:27:24.843058 | orchestrator | 2026-02-09 06:27:24.843071 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-09 06:27:24.843083 | orchestrator | Monday 09 February 2026 06:27:19 +0000 (0:00:02.155) 0:06:11.754 ******* 2026-02-09 06:27:24.843096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-09 06:27:24.843109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-09 06:27:24.843122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-09 06:27:24.843139 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:24.843152 | orchestrator | 2026-02-09 06:27:24.843165 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-09 06:27:24.843178 | orchestrator | Monday 09 February 2026 06:27:21 +0000 (0:00:01.291) 0:06:13.046 ******* 2026-02-09 06:27:24.843190 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:27:24.843203 | orchestrator | 2026-02-09 06:27:24.843215 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-09 06:27:24.843225 | orchestrator | Monday 09 February 2026 06:27:22 +0000 
(0:00:01.208) 0:06:14.254 ******* 2026-02-09 06:27:24.843236 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:27:24.843247 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 06:27:24.843259 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 06:27:24.843270 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-09 06:27:24.843280 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-09 06:27:24.843299 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-09 06:28:27.492834 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-09 06:28:27.493035 | orchestrator | 2026-02-09 06:28:27.493058 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-09 06:28:27.493071 | orchestrator | Monday 09 February 2026 06:27:24 +0000 (0:00:02.367) 0:06:16.622 ******* 2026-02-09 06:28:27.493082 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:28:27.493094 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 06:28:27.493105 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 06:28:27.493116 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-09 06:28:27.493150 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-09 06:28:27.493161 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-09 06:28:27.493171 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-09 
06:28:27.493182 | orchestrator | 2026-02-09 06:28:27.493193 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-09 06:28:27.493203 | orchestrator | Monday 09 February 2026 06:27:28 +0000 (0:00:03.448) 0:06:20.070 ******* 2026-02-09 06:28:27.493214 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-09 06:28:27.493225 | orchestrator | 2026-02-09 06:28:27.493236 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-09 06:28:27.493246 | orchestrator | Monday 09 February 2026 06:27:30 +0000 (0:00:02.332) 0:06:22.403 ******* 2026-02-09 06:28:27.493257 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.493268 | orchestrator | 2026-02-09 06:28:27.493279 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-09 06:28:27.493289 | orchestrator | Monday 09 February 2026 06:27:31 +0000 (0:00:01.313) 0:06:23.716 ******* 2026-02-09 06:28:27.493300 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.493311 | orchestrator | 2026-02-09 06:28:27.493321 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-09 06:28:27.493331 | orchestrator | Monday 09 February 2026 06:27:33 +0000 (0:00:01.140) 0:06:24.857 ******* 2026-02-09 06:28:27.493342 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-09 06:28:27.493356 | orchestrator | 2026-02-09 06:28:27.493368 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-09 06:28:27.493381 | orchestrator | Monday 09 February 2026 06:27:35 +0000 (0:00:02.263) 0:06:27.120 ******* 2026-02-09 06:28:27.493393 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.493405 | orchestrator | 2026-02-09 06:28:27.493418 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-02-09 06:28:27.493431 | orchestrator | Monday 09 February 2026 06:27:36 +0000 (0:00:01.271) 0:06:28.392 ******* 2026-02-09 06:28:27.493443 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:28:27.493456 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-09 06:28:27.493469 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-09 06:28:27.493482 | orchestrator | 2026-02-09 06:28:27.493495 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-09 06:28:27.493507 | orchestrator | Monday 09 February 2026 06:27:39 +0000 (0:00:02.572) 0:06:30.965 ******* 2026-02-09 06:28:27.493520 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0']) 2026-02-09 06:28:27.493533 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1']) 2026-02-09 06:28:27.493546 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2']) 2026-02-09 06:28:27.493558 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0']) 2026-02-09 06:28:27.493571 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1']) 2026-02-09 06:28:27.493584 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2']) 2026-02-09 06:28:27.493596 | orchestrator | 2026-02-09 06:28:27.493609 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-09 06:28:27.493635 | orchestrator | Monday 09 February 2026 06:27:52 +0000 (0:00:13.392) 0:06:44.357 ******* 2026-02-09 06:28:27.493649 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:28:27.493663 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:28:27.493675 | orchestrator | 2026-02-09 06:28:27.493695 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-09 06:28:27.493707 | orchestrator | Monday 09 February 2026 06:27:56 +0000 (0:00:03.984) 0:06:48.341 ******* 2026-02-09 06:28:27.493718 | orchestrator | changed: [testbed-node-0] 2026-02-09 06:28:27.493728 | orchestrator | 2026-02-09 06:28:27.493738 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-09 06:28:27.493749 | orchestrator | Monday 09 February 2026 06:27:59 +0000 (0:00:02.517) 0:06:50.859 ******* 2026-02-09 06:28:27.493759 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-09 06:28:27.493770 | orchestrator | 2026-02-09 06:28:27.493780 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-09 06:28:27.493791 | orchestrator | Monday 09 February 2026 06:28:00 +0000 (0:00:01.498) 0:06:52.358 ******* 2026-02-09 06:28:27.493820 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-09 06:28:27.493831 | orchestrator | 2026-02-09 06:28:27.493842 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-09 06:28:27.493853 | orchestrator | Monday 09 February 2026 06:28:02 +0000 (0:00:01.619) 0:06:53.977 ******* 2026-02-09 06:28:27.493864 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:28:27.493874 | orchestrator | 2026-02-09 06:28:27.493885 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-09 06:28:27.493896 | orchestrator | Monday 09 February 2026 06:28:03 +0000 (0:00:01.616) 0:06:55.594 ******* 2026-02-09 06:28:27.493906 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.493917 | orchestrator | 
2026-02-09 06:28:27.493928 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-09 06:28:27.493957 | orchestrator | Monday 09 February 2026 06:28:04 +0000 (0:00:01.156) 0:06:56.750 ******* 2026-02-09 06:28:27.493968 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.493979 | orchestrator | 2026-02-09 06:28:27.493989 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-09 06:28:27.494000 | orchestrator | Monday 09 February 2026 06:28:06 +0000 (0:00:01.219) 0:06:57.970 ******* 2026-02-09 06:28:27.494010 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494087 | orchestrator | 2026-02-09 06:28:27.494098 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-09 06:28:27.494109 | orchestrator | Monday 09 February 2026 06:28:07 +0000 (0:00:01.148) 0:06:59.119 ******* 2026-02-09 06:28:27.494120 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:28:27.494131 | orchestrator | 2026-02-09 06:28:27.494141 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-09 06:28:27.494152 | orchestrator | Monday 09 February 2026 06:28:08 +0000 (0:00:01.646) 0:07:00.765 ******* 2026-02-09 06:28:27.494163 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494174 | orchestrator | 2026-02-09 06:28:27.494184 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-09 06:28:27.494195 | orchestrator | Monday 09 February 2026 06:28:10 +0000 (0:00:01.208) 0:07:01.974 ******* 2026-02-09 06:28:27.494206 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494217 | orchestrator | 2026-02-09 06:28:27.494227 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-09 06:28:27.494238 | orchestrator | Monday 09 February 2026 06:28:11 +0000 
(0:00:01.269) 0:07:03.243 ******* 2026-02-09 06:28:27.494249 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:28:27.494260 | orchestrator | 2026-02-09 06:28:27.494270 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-09 06:28:27.494281 | orchestrator | Monday 09 February 2026 06:28:13 +0000 (0:00:01.567) 0:07:04.811 ******* 2026-02-09 06:28:27.494292 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:28:27.494303 | orchestrator | 2026-02-09 06:28:27.494314 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-09 06:28:27.494324 | orchestrator | Monday 09 February 2026 06:28:14 +0000 (0:00:01.571) 0:07:06.382 ******* 2026-02-09 06:28:27.494343 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494354 | orchestrator | 2026-02-09 06:28:27.494365 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-09 06:28:27.494376 | orchestrator | Monday 09 February 2026 06:28:15 +0000 (0:00:01.150) 0:07:07.532 ******* 2026-02-09 06:28:27.494387 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:28:27.494398 | orchestrator | 2026-02-09 06:28:27.494408 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-09 06:28:27.494419 | orchestrator | Monday 09 February 2026 06:28:16 +0000 (0:00:01.220) 0:07:08.753 ******* 2026-02-09 06:28:27.494430 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494441 | orchestrator | 2026-02-09 06:28:27.494451 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-09 06:28:27.494462 | orchestrator | Monday 09 February 2026 06:28:18 +0000 (0:00:01.131) 0:07:09.885 ******* 2026-02-09 06:28:27.494473 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494484 | orchestrator | 2026-02-09 06:28:27.494495 | orchestrator | TASK [ceph-handler : Set_fact 
handler_rgw_status] ****************************** 2026-02-09 06:28:27.494505 | orchestrator | Monday 09 February 2026 06:28:19 +0000 (0:00:01.119) 0:07:11.004 ******* 2026-02-09 06:28:27.494516 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494527 | orchestrator | 2026-02-09 06:28:27.494538 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-09 06:28:27.494550 | orchestrator | Monday 09 February 2026 06:28:20 +0000 (0:00:01.149) 0:07:12.153 ******* 2026-02-09 06:28:27.494568 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494587 | orchestrator | 2026-02-09 06:28:27.494605 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-09 06:28:27.494623 | orchestrator | Monday 09 February 2026 06:28:21 +0000 (0:00:01.207) 0:07:13.361 ******* 2026-02-09 06:28:27.494648 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494666 | orchestrator | 2026-02-09 06:28:27.494684 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-09 06:28:27.494702 | orchestrator | Monday 09 February 2026 06:28:22 +0000 (0:00:01.237) 0:07:14.598 ******* 2026-02-09 06:28:27.494720 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:28:27.494739 | orchestrator | 2026-02-09 06:28:27.494757 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-09 06:28:27.494774 | orchestrator | Monday 09 February 2026 06:28:23 +0000 (0:00:01.197) 0:07:15.796 ******* 2026-02-09 06:28:27.494793 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:28:27.494805 | orchestrator | 2026-02-09 06:28:27.494815 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-09 06:28:27.494826 | orchestrator | Monday 09 February 2026 06:28:25 +0000 (0:00:01.164) 0:07:16.960 ******* 2026-02-09 06:28:27.494837 | orchestrator | ok: 
[testbed-node-0] 2026-02-09 06:28:27.494847 | orchestrator | 2026-02-09 06:28:27.494858 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-09 06:28:27.494868 | orchestrator | Monday 09 February 2026 06:28:26 +0000 (0:00:01.185) 0:07:18.146 ******* 2026-02-09 06:28:27.494879 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:28:27.494890 | orchestrator | 2026-02-09 06:28:27.494900 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-09 06:28:27.494922 | orchestrator | Monday 09 February 2026 06:28:27 +0000 (0:00:01.128) 0:07:19.275 ******* 2026-02-09 06:29:17.393475 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.393593 | orchestrator | 2026-02-09 06:29:17.393612 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-09 06:29:17.393625 | orchestrator | Monday 09 February 2026 06:28:28 +0000 (0:00:01.153) 0:07:20.428 ******* 2026-02-09 06:29:17.393637 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.393648 | orchestrator | 2026-02-09 06:29:17.393658 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-09 06:29:17.393669 | orchestrator | Monday 09 February 2026 06:28:29 +0000 (0:00:01.141) 0:07:21.570 ******* 2026-02-09 06:29:17.393704 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.393716 | orchestrator | 2026-02-09 06:29:17.393727 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-09 06:29:17.393737 | orchestrator | Monday 09 February 2026 06:28:30 +0000 (0:00:01.159) 0:07:22.730 ******* 2026-02-09 06:29:17.393748 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.393758 | orchestrator | 2026-02-09 06:29:17.393769 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-09 06:29:17.393779 | 
orchestrator | Monday 09 February 2026 06:28:32 +0000 (0:00:01.326) 0:07:24.057 ******* 2026-02-09 06:29:17.393790 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.393801 | orchestrator | 2026-02-09 06:29:17.393811 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-09 06:29:17.393822 | orchestrator | Monday 09 February 2026 06:28:33 +0000 (0:00:01.158) 0:07:25.216 ******* 2026-02-09 06:29:17.393832 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.393843 | orchestrator | 2026-02-09 06:29:17.393853 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-09 06:29:17.393865 | orchestrator | Monday 09 February 2026 06:28:34 +0000 (0:00:01.109) 0:07:26.325 ******* 2026-02-09 06:29:17.393875 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.393886 | orchestrator | 2026-02-09 06:29:17.393896 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-09 06:29:17.393907 | orchestrator | Monday 09 February 2026 06:28:35 +0000 (0:00:01.115) 0:07:27.441 ******* 2026-02-09 06:29:17.393917 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.393928 | orchestrator | 2026-02-09 06:29:17.393991 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-09 06:29:17.394006 | orchestrator | Monday 09 February 2026 06:28:36 +0000 (0:00:01.130) 0:07:28.572 ******* 2026-02-09 06:29:17.394082 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394096 | orchestrator | 2026-02-09 06:29:17.394109 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-09 06:29:17.394122 | orchestrator | Monday 09 February 2026 06:28:37 +0000 (0:00:01.143) 0:07:29.715 ******* 2026-02-09 06:29:17.394134 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394147 | 
orchestrator | 2026-02-09 06:29:17.394159 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-09 06:29:17.394171 | orchestrator | Monday 09 February 2026 06:28:39 +0000 (0:00:01.100) 0:07:30.816 ******* 2026-02-09 06:29:17.394184 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394196 | orchestrator | 2026-02-09 06:29:17.394209 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-09 06:29:17.394221 | orchestrator | Monday 09 February 2026 06:28:40 +0000 (0:00:01.103) 0:07:31.920 ******* 2026-02-09 06:29:17.394235 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:29:17.394249 | orchestrator | 2026-02-09 06:29:17.394261 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-09 06:29:17.394273 | orchestrator | Monday 09 February 2026 06:28:42 +0000 (0:00:01.955) 0:07:33.876 ******* 2026-02-09 06:29:17.394284 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:29:17.394295 | orchestrator | 2026-02-09 06:29:17.394306 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-09 06:29:17.394317 | orchestrator | Monday 09 February 2026 06:28:44 +0000 (0:00:02.343) 0:07:36.220 ******* 2026-02-09 06:29:17.394327 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-09 06:29:17.394340 | orchestrator | 2026-02-09 06:29:17.394351 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-09 06:29:17.394361 | orchestrator | Monday 09 February 2026 06:28:45 +0000 (0:00:01.533) 0:07:37.753 ******* 2026-02-09 06:29:17.394372 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394383 | orchestrator | 2026-02-09 06:29:17.394394 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-09 
06:29:17.394414 | orchestrator | Monday 09 February 2026 06:28:47 +0000 (0:00:01.141) 0:07:38.894 ******* 2026-02-09 06:29:17.394425 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394435 | orchestrator | 2026-02-09 06:29:17.394462 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-09 06:29:17.394473 | orchestrator | Monday 09 February 2026 06:28:48 +0000 (0:00:01.135) 0:07:40.030 ******* 2026-02-09 06:29:17.394484 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-09 06:29:17.394495 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-09 06:29:17.394506 | orchestrator | 2026-02-09 06:29:17.394516 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-09 06:29:17.394527 | orchestrator | Monday 09 February 2026 06:28:50 +0000 (0:00:01.813) 0:07:41.843 ******* 2026-02-09 06:29:17.394538 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:29:17.394549 | orchestrator | 2026-02-09 06:29:17.394560 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-09 06:29:17.394571 | orchestrator | Monday 09 February 2026 06:28:51 +0000 (0:00:01.656) 0:07:43.500 ******* 2026-02-09 06:29:17.394582 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394593 | orchestrator | 2026-02-09 06:29:17.394603 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-09 06:29:17.394614 | orchestrator | Monday 09 February 2026 06:28:52 +0000 (0:00:01.181) 0:07:44.682 ******* 2026-02-09 06:29:17.394625 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394636 | orchestrator | 2026-02-09 06:29:17.394666 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-09 06:29:17.394678 | orchestrator | Monday 09 
February 2026 06:28:54 +0000 (0:00:01.166) 0:07:45.848 ******* 2026-02-09 06:29:17.394689 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394700 | orchestrator | 2026-02-09 06:29:17.394710 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-09 06:29:17.394721 | orchestrator | Monday 09 February 2026 06:28:55 +0000 (0:00:01.166) 0:07:47.015 ******* 2026-02-09 06:29:17.394732 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-09 06:29:17.394742 | orchestrator | 2026-02-09 06:29:17.394753 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-09 06:29:17.394764 | orchestrator | Monday 09 February 2026 06:28:56 +0000 (0:00:01.459) 0:07:48.474 ******* 2026-02-09 06:29:17.394774 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:29:17.394785 | orchestrator | 2026-02-09 06:29:17.394796 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-09 06:29:17.394806 | orchestrator | Monday 09 February 2026 06:28:58 +0000 (0:00:01.691) 0:07:50.166 ******* 2026-02-09 06:29:17.394817 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-09 06:29:17.394827 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-09 06:29:17.394838 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-09 06:29:17.394849 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394859 | orchestrator | 2026-02-09 06:29:17.394870 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-09 06:29:17.394881 | orchestrator | Monday 09 February 2026 06:28:59 +0000 (0:00:01.127) 0:07:51.294 ******* 2026-02-09 06:29:17.394892 | orchestrator | skipping: [testbed-node-0] 2026-02-09 
06:29:17.394902 | orchestrator | 2026-02-09 06:29:17.394913 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-09 06:29:17.394924 | orchestrator | Monday 09 February 2026 06:29:00 +0000 (0:00:01.136) 0:07:52.430 ******* 2026-02-09 06:29:17.394934 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.394977 | orchestrator | 2026-02-09 06:29:17.394988 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-09 06:29:17.394999 | orchestrator | Monday 09 February 2026 06:29:01 +0000 (0:00:01.143) 0:07:53.574 ******* 2026-02-09 06:29:17.395017 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.395028 | orchestrator | 2026-02-09 06:29:17.395038 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-09 06:29:17.395049 | orchestrator | Monday 09 February 2026 06:29:02 +0000 (0:00:01.152) 0:07:54.726 ******* 2026-02-09 06:29:17.395059 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.395070 | orchestrator | 2026-02-09 06:29:17.395080 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-09 06:29:17.395091 | orchestrator | Monday 09 February 2026 06:29:04 +0000 (0:00:01.125) 0:07:55.852 ******* 2026-02-09 06:29:17.395101 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.395112 | orchestrator | 2026-02-09 06:29:17.395123 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-09 06:29:17.395134 | orchestrator | Monday 09 February 2026 06:29:05 +0000 (0:00:01.176) 0:07:57.028 ******* 2026-02-09 06:29:17.395144 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:29:17.395155 | orchestrator | 2026-02-09 06:29:17.395165 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-09 06:29:17.395176 | orchestrator | Monday 09 
February 2026 06:29:07 +0000 (0:00:02.550) 0:07:59.579 ******* 2026-02-09 06:29:17.395187 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:29:17.395197 | orchestrator | 2026-02-09 06:29:17.395208 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-09 06:29:17.395219 | orchestrator | Monday 09 February 2026 06:29:08 +0000 (0:00:01.151) 0:08:00.730 ******* 2026-02-09 06:29:17.395229 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-09 06:29:17.395240 | orchestrator | 2026-02-09 06:29:17.395250 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-09 06:29:17.395261 | orchestrator | Monday 09 February 2026 06:29:10 +0000 (0:00:01.485) 0:08:02.216 ******* 2026-02-09 06:29:17.395271 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.395282 | orchestrator | 2026-02-09 06:29:17.395292 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-09 06:29:17.395303 | orchestrator | Monday 09 February 2026 06:29:11 +0000 (0:00:01.186) 0:08:03.403 ******* 2026-02-09 06:29:17.395314 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.395324 | orchestrator | 2026-02-09 06:29:17.395341 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-09 06:29:17.395352 | orchestrator | Monday 09 February 2026 06:29:12 +0000 (0:00:01.155) 0:08:04.559 ******* 2026-02-09 06:29:17.395362 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.395373 | orchestrator | 2026-02-09 06:29:17.395383 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-09 06:29:17.395394 | orchestrator | Monday 09 February 2026 06:29:13 +0000 (0:00:01.153) 0:08:05.712 ******* 2026-02-09 06:29:17.395404 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.395415 | 
orchestrator | 2026-02-09 06:29:17.395425 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-09 06:29:17.395436 | orchestrator | Monday 09 February 2026 06:29:15 +0000 (0:00:01.145) 0:08:06.858 ******* 2026-02-09 06:29:17.395447 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.395457 | orchestrator | 2026-02-09 06:29:17.395468 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-09 06:29:17.395478 | orchestrator | Monday 09 February 2026 06:29:16 +0000 (0:00:01.187) 0:08:08.046 ******* 2026-02-09 06:29:17.395489 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:29:17.395499 | orchestrator | 2026-02-09 06:29:17.395509 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-09 06:29:17.395527 | orchestrator | Monday 09 February 2026 06:29:17 +0000 (0:00:01.126) 0:08:09.172 ******* 2026-02-09 06:30:02.699693 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.699804 | orchestrator | 2026-02-09 06:30:02.699817 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-09 06:30:02.699850 | orchestrator | Monday 09 February 2026 06:29:18 +0000 (0:00:01.145) 0:08:10.318 ******* 2026-02-09 06:30:02.699862 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.699873 | orchestrator | 2026-02-09 06:30:02.699885 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-09 06:30:02.699897 | orchestrator | Monday 09 February 2026 06:29:19 +0000 (0:00:01.123) 0:08:11.442 ******* 2026-02-09 06:30:02.699908 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:30:02.699920 | orchestrator | 2026-02-09 06:30:02.699932 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-09 06:30:02.699971 | orchestrator | Monday 09 February 2026 06:29:20 
+0000 (0:00:01.245) 0:08:12.688 ******* 2026-02-09 06:30:02.699983 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-09 06:30:02.699997 | orchestrator | 2026-02-09 06:30:02.700006 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-09 06:30:02.700013 | orchestrator | Monday 09 February 2026 06:29:22 +0000 (0:00:01.576) 0:08:14.264 ******* 2026-02-09 06:30:02.700020 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-09 06:30:02.700027 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-09 06:30:02.700034 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-09 06:30:02.700041 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-09 06:30:02.700047 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-09 06:30:02.700054 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-09 06:30:02.700060 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-09 06:30:02.700067 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-09 06:30:02.700074 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-09 06:30:02.700081 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-09 06:30:02.700087 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-09 06:30:02.700094 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-09 06:30:02.700100 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-09 06:30:02.700107 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-09 06:30:02.700114 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-09 06:30:02.700120 | orchestrator | ok: [testbed-node-0] => 
(item=/var/log/ceph) 2026-02-09 06:30:02.700127 | orchestrator | 2026-02-09 06:30:02.700134 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-09 06:30:02.700140 | orchestrator | Monday 09 February 2026 06:29:29 +0000 (0:00:06.774) 0:08:21.039 ******* 2026-02-09 06:30:02.700147 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700153 | orchestrator | 2026-02-09 06:30:02.700160 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-09 06:30:02.700167 | orchestrator | Monday 09 February 2026 06:29:30 +0000 (0:00:01.206) 0:08:22.245 ******* 2026-02-09 06:30:02.700173 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700180 | orchestrator | 2026-02-09 06:30:02.700187 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-09 06:30:02.700193 | orchestrator | Monday 09 February 2026 06:29:31 +0000 (0:00:01.146) 0:08:23.392 ******* 2026-02-09 06:30:02.700200 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700206 | orchestrator | 2026-02-09 06:30:02.700213 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-09 06:30:02.700219 | orchestrator | Monday 09 February 2026 06:29:32 +0000 (0:00:01.150) 0:08:24.543 ******* 2026-02-09 06:30:02.700228 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700236 | orchestrator | 2026-02-09 06:30:02.700243 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-09 06:30:02.700251 | orchestrator | Monday 09 February 2026 06:29:33 +0000 (0:00:01.094) 0:08:25.638 ******* 2026-02-09 06:30:02.700269 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700277 | orchestrator | 2026-02-09 06:30:02.700285 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-09 
06:30:02.700293 | orchestrator | Monday 09 February 2026 06:29:34 +0000 (0:00:01.120) 0:08:26.759 ******* 2026-02-09 06:30:02.700301 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700309 | orchestrator | 2026-02-09 06:30:02.700330 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-09 06:30:02.700338 | orchestrator | Monday 09 February 2026 06:29:36 +0000 (0:00:01.196) 0:08:27.955 ******* 2026-02-09 06:30:02.700346 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700354 | orchestrator | 2026-02-09 06:30:02.700361 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-09 06:30:02.700370 | orchestrator | Monday 09 February 2026 06:29:37 +0000 (0:00:01.130) 0:08:29.086 ******* 2026-02-09 06:30:02.700377 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700385 | orchestrator | 2026-02-09 06:30:02.700393 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-09 06:30:02.700401 | orchestrator | Monday 09 February 2026 06:29:38 +0000 (0:00:01.183) 0:08:30.269 ******* 2026-02-09 06:30:02.700409 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700417 | orchestrator | 2026-02-09 06:30:02.700424 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-09 06:30:02.700432 | orchestrator | Monday 09 February 2026 06:29:39 +0000 (0:00:01.110) 0:08:31.380 ******* 2026-02-09 06:30:02.700440 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700448 | orchestrator | 2026-02-09 06:30:02.700455 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-09 06:30:02.700479 | orchestrator | Monday 09 February 2026 06:29:40 +0000 (0:00:01.157) 0:08:32.538 ******* 2026-02-09 
06:30:02.700487 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700495 | orchestrator | 2026-02-09 06:30:02.700502 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-09 06:30:02.700510 | orchestrator | Monday 09 February 2026 06:29:41 +0000 (0:00:01.123) 0:08:33.661 ******* 2026-02-09 06:30:02.700518 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700525 | orchestrator | 2026-02-09 06:30:02.700533 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-09 06:30:02.700541 | orchestrator | Monday 09 February 2026 06:29:42 +0000 (0:00:01.093) 0:08:34.754 ******* 2026-02-09 06:30:02.700548 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700557 | orchestrator | 2026-02-09 06:30:02.700565 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-09 06:30:02.700574 | orchestrator | Monday 09 February 2026 06:29:44 +0000 (0:00:01.249) 0:08:36.004 ******* 2026-02-09 06:30:02.700581 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700588 | orchestrator | 2026-02-09 06:30:02.700594 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-09 06:30:02.700601 | orchestrator | Monday 09 February 2026 06:29:45 +0000 (0:00:01.115) 0:08:37.120 ******* 2026-02-09 06:30:02.700608 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700615 | orchestrator | 2026-02-09 06:30:02.700626 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-09 06:30:02.700637 | orchestrator | Monday 09 February 2026 06:29:46 +0000 (0:00:01.189) 0:08:38.310 ******* 2026-02-09 06:30:02.700647 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700658 | orchestrator | 2026-02-09 06:30:02.700670 | orchestrator | TASK [ceph-config : Create ceph conf directory] 
******************************** 2026-02-09 06:30:02.700681 | orchestrator | Monday 09 February 2026 06:29:47 +0000 (0:00:01.089) 0:08:39.400 ******* 2026-02-09 06:30:02.700691 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700702 | orchestrator | 2026-02-09 06:30:02.700715 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-09 06:30:02.700737 | orchestrator | Monday 09 February 2026 06:29:48 +0000 (0:00:01.086) 0:08:40.486 ******* 2026-02-09 06:30:02.700744 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700750 | orchestrator | 2026-02-09 06:30:02.700757 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-09 06:30:02.700763 | orchestrator | Monday 09 February 2026 06:29:49 +0000 (0:00:01.134) 0:08:41.620 ******* 2026-02-09 06:30:02.700770 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700777 | orchestrator | 2026-02-09 06:30:02.700783 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-09 06:30:02.700790 | orchestrator | Monday 09 February 2026 06:29:50 +0000 (0:00:01.132) 0:08:42.753 ******* 2026-02-09 06:30:02.700796 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700803 | orchestrator | 2026-02-09 06:30:02.700809 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-09 06:30:02.700816 | orchestrator | Monday 09 February 2026 06:29:52 +0000 (0:00:01.130) 0:08:43.883 ******* 2026-02-09 06:30:02.700822 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700829 | orchestrator | 2026-02-09 06:30:02.700835 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-09 06:30:02.700842 | orchestrator | Monday 09 February 2026 06:29:53 +0000 (0:00:01.144) 0:08:45.027 ******* 
2026-02-09 06:30:02.700848 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-09 06:30:02.700855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-09 06:30:02.700861 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-09 06:30:02.700868 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700874 | orchestrator | 2026-02-09 06:30:02.700881 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-09 06:30:02.700887 | orchestrator | Monday 09 February 2026 06:29:55 +0000 (0:00:01.778) 0:08:46.806 ******* 2026-02-09 06:30:02.700894 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-09 06:30:02.700900 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-09 06:30:02.700907 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-09 06:30:02.700913 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.700920 | orchestrator | 2026-02-09 06:30:02.700926 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-09 06:30:02.700933 | orchestrator | Monday 09 February 2026 06:29:56 +0000 (0:00:01.520) 0:08:48.327 ******* 2026-02-09 06:30:02.700981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-09 06:30:02.700991 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-09 06:30:02.700998 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-09 06:30:02.701005 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.701012 | orchestrator | 2026-02-09 06:30:02.701020 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-09 06:30:02.701027 | orchestrator | Monday 09 February 2026 06:29:57 +0000 (0:00:01.372) 0:08:49.699 ******* 2026-02-09 06:30:02.701034 | orchestrator | skipping: 
[testbed-node-0] 2026-02-09 06:30:02.701041 | orchestrator | 2026-02-09 06:30:02.701048 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-09 06:30:02.701055 | orchestrator | Monday 09 February 2026 06:29:59 +0000 (0:00:01.243) 0:08:50.942 ******* 2026-02-09 06:30:02.701063 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-09 06:30:02.701070 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:30:02.701077 | orchestrator | 2026-02-09 06:30:02.701084 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-09 06:30:02.701091 | orchestrator | Monday 09 February 2026 06:30:00 +0000 (0:00:01.452) 0:08:52.394 ******* 2026-02-09 06:30:02.701098 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:30:02.701106 | orchestrator | 2026-02-09 06:30:02.701119 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-09 06:30:02.701132 | orchestrator | Monday 09 February 2026 06:30:02 +0000 (0:00:02.086) 0:08:54.481 ******* 2026-02-09 06:31:29.714661 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.714811 | orchestrator | 2026-02-09 06:31:29.714833 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-09 06:31:29.714847 | orchestrator | Monday 09 February 2026 06:30:03 +0000 (0:00:01.215) 0:08:55.697 ******* 2026-02-09 06:31:29.714858 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-02-09 06:31:29.714870 | orchestrator | 2026-02-09 06:31:29.714881 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-09 06:31:29.714893 | orchestrator | Monday 09 February 2026 06:30:05 +0000 (0:00:01.584) 0:08:57.282 ******* 2026-02-09 06:31:29.714905 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-09 06:31:29.714916 | orchestrator | 
2026-02-09 06:31:29.714927 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-09 06:31:29.714938 | orchestrator | Monday 09 February 2026 06:30:09 +0000 (0:00:03.519) 0:09:00.801 ******* 2026-02-09 06:31:29.714949 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:31:29.715030 | orchestrator | 2026-02-09 06:31:29.715041 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-09 06:31:29.715053 | orchestrator | Monday 09 February 2026 06:30:10 +0000 (0:00:01.113) 0:09:01.915 ******* 2026-02-09 06:31:29.715064 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715075 | orchestrator | 2026-02-09 06:31:29.715086 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-09 06:31:29.715097 | orchestrator | Monday 09 February 2026 06:30:11 +0000 (0:00:01.183) 0:09:03.098 ******* 2026-02-09 06:31:29.715107 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715118 | orchestrator | 2026-02-09 06:31:29.715129 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-09 06:31:29.715140 | orchestrator | Monday 09 February 2026 06:30:12 +0000 (0:00:01.181) 0:09:04.279 ******* 2026-02-09 06:31:29.715151 | orchestrator | changed: [testbed-node-0] 2026-02-09 06:31:29.715164 | orchestrator | 2026-02-09 06:31:29.715177 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-09 06:31:29.715189 | orchestrator | Monday 09 February 2026 06:30:14 +0000 (0:00:01.973) 0:09:06.253 ******* 2026-02-09 06:31:29.715202 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715214 | orchestrator | 2026-02-09 06:31:29.715226 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-09 06:31:29.715239 | orchestrator | Monday 09 February 2026 06:30:16 +0000 (0:00:01.656) 0:09:07.910 
******* 2026-02-09 06:31:29.715252 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715264 | orchestrator | 2026-02-09 06:31:29.715277 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-09 06:31:29.715326 | orchestrator | Monday 09 February 2026 06:30:17 +0000 (0:00:01.539) 0:09:09.449 ******* 2026-02-09 06:31:29.715340 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715353 | orchestrator | 2026-02-09 06:31:29.715365 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-09 06:31:29.715378 | orchestrator | Monday 09 February 2026 06:30:19 +0000 (0:00:01.491) 0:09:10.941 ******* 2026-02-09 06:31:29.715391 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715404 | orchestrator | 2026-02-09 06:31:29.715416 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-09 06:31:29.715428 | orchestrator | Monday 09 February 2026 06:30:20 +0000 (0:00:01.728) 0:09:12.669 ******* 2026-02-09 06:31:29.715442 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715455 | orchestrator | 2026-02-09 06:31:29.715467 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-09 06:31:29.715480 | orchestrator | Monday 09 February 2026 06:30:22 +0000 (0:00:01.717) 0:09:14.386 ******* 2026-02-09 06:31:29.715493 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-09 06:31:29.715532 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-09 06:31:29.715546 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-09 06:31:29.715558 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-09 06:31:29.715571 | orchestrator | 2026-02-09 06:31:29.715581 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-09 06:31:29.715592 | 
orchestrator | Monday 09 February 2026 06:30:26 +0000 (0:00:03.961) 0:09:18.348 ******* 2026-02-09 06:31:29.715603 | orchestrator | changed: [testbed-node-0] 2026-02-09 06:31:29.715613 | orchestrator | 2026-02-09 06:31:29.715624 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-09 06:31:29.715635 | orchestrator | Monday 09 February 2026 06:30:28 +0000 (0:00:02.036) 0:09:20.385 ******* 2026-02-09 06:31:29.715645 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715670 | orchestrator | 2026-02-09 06:31:29.715681 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-09 06:31:29.715692 | orchestrator | Monday 09 February 2026 06:30:29 +0000 (0:00:01.186) 0:09:21.571 ******* 2026-02-09 06:31:29.715703 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715713 | orchestrator | 2026-02-09 06:31:29.715724 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-09 06:31:29.715735 | orchestrator | Monday 09 February 2026 06:30:30 +0000 (0:00:01.220) 0:09:22.792 ******* 2026-02-09 06:31:29.715746 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715756 | orchestrator | 2026-02-09 06:31:29.715767 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-09 06:31:29.715778 | orchestrator | Monday 09 February 2026 06:30:33 +0000 (0:00:02.196) 0:09:24.988 ******* 2026-02-09 06:31:29.715788 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.715799 | orchestrator | 2026-02-09 06:31:29.715809 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-09 06:31:29.715820 | orchestrator | Monday 09 February 2026 06:30:34 +0000 (0:00:01.521) 0:09:26.510 ******* 2026-02-09 06:31:29.715830 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:31:29.715841 | orchestrator | 2026-02-09 06:31:29.715852 | 
orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-09 06:31:29.715862 | orchestrator | Monday 09 February 2026 06:30:35 +0000 (0:00:01.158) 0:09:27.669 ******* 2026-02-09 06:31:29.715893 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-09 06:31:29.715904 | orchestrator | 2026-02-09 06:31:29.715915 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-09 06:31:29.715926 | orchestrator | Monday 09 February 2026 06:30:37 +0000 (0:00:01.568) 0:09:29.237 ******* 2026-02-09 06:31:29.715937 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:31:29.715947 | orchestrator | 2026-02-09 06:31:29.715982 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-09 06:31:29.715993 | orchestrator | Monday 09 February 2026 06:30:38 +0000 (0:00:01.152) 0:09:30.390 ******* 2026-02-09 06:31:29.716004 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:31:29.716014 | orchestrator | 2026-02-09 06:31:29.716025 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-09 06:31:29.716036 | orchestrator | Monday 09 February 2026 06:30:39 +0000 (0:00:01.101) 0:09:31.491 ******* 2026-02-09 06:31:29.716046 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-09 06:31:29.716057 | orchestrator | 2026-02-09 06:31:29.716067 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-09 06:31:29.716078 | orchestrator | Monday 09 February 2026 06:30:41 +0000 (0:00:01.485) 0:09:32.976 ******* 2026-02-09 06:31:29.716088 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.716099 | orchestrator | 2026-02-09 06:31:29.716110 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-09 06:31:29.716120 | 
orchestrator | Monday 09 February 2026 06:30:43 +0000 (0:00:02.377) 0:09:35.354 ******* 2026-02-09 06:31:29.716140 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.716151 | orchestrator | 2026-02-09 06:31:29.716162 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-09 06:31:29.716172 | orchestrator | Monday 09 February 2026 06:30:45 +0000 (0:00:01.964) 0:09:37.319 ******* 2026-02-09 06:31:29.716183 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.716194 | orchestrator | 2026-02-09 06:31:29.716204 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-09 06:31:29.716215 | orchestrator | Monday 09 February 2026 06:30:47 +0000 (0:00:02.349) 0:09:39.668 ******* 2026-02-09 06:31:29.716226 | orchestrator | changed: [testbed-node-0] 2026-02-09 06:31:29.716237 | orchestrator | 2026-02-09 06:31:29.716247 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-09 06:31:29.716258 | orchestrator | Monday 09 February 2026 06:30:51 +0000 (0:00:03.265) 0:09:42.933 ******* 2026-02-09 06:31:29.716269 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-09 06:31:29.716279 | orchestrator | 2026-02-09 06:31:29.716290 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-09 06:31:29.716301 | orchestrator | Monday 09 February 2026 06:30:52 +0000 (0:00:01.590) 0:09:44.524 ******* 2026-02-09 06:31:29.716312 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-09 06:31:29.716323 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.716333 | orchestrator | 2026-02-09 06:31:29.716344 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-09 06:31:29.716354 | orchestrator | Monday 09 February 2026 06:31:15 +0000 (0:00:22.925) 0:10:07.450 ******* 2026-02-09 06:31:29.716365 | orchestrator | ok: [testbed-node-0] 2026-02-09 06:31:29.716376 | orchestrator | 2026-02-09 06:31:29.716386 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-09 06:31:29.716397 | orchestrator | Monday 09 February 2026 06:31:18 +0000 (0:00:03.071) 0:10:10.521 ******* 2026-02-09 06:31:29.716407 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:31:29.716418 | orchestrator | 2026-02-09 06:31:29.716428 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-09 06:31:29.716439 | orchestrator | Monday 09 February 2026 06:31:19 +0000 (0:00:01.119) 0:10:11.641 ******* 2026-02-09 06:31:29.716468 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-09 06:31:29.716487 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-09 06:31:29.716499 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': 
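The quorum wait above (and the containerized variant that fails later in this run) retries `ceph --cluster ceph -m <mon-ip> quorum_status --format json` inside the mon container until the monitor appears in the quorum. A minimal sketch of the membership check being retried, assuming the standard `quorum_status` JSON layout (`quorum_names`, `quorum_leader_name`); the sample payload below is illustrative, not captured from this cluster:

```python
import json

def mon_in_quorum(quorum_status_json: str, mon_name: str) -> bool:
    """Return True once mon_name is listed in the cluster's current quorum."""
    status = json.loads(quorum_status_json)
    return mon_name in status.get("quorum_names", [])

# Illustrative payload shaped like `ceph quorum_status --format json` (trimmed).
sample = json.dumps({
    "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    "quorum_leader_name": "testbed-node-1",
})
```

The Ansible task wraps this check in a `retries`/`until` loop, which is why a monitor that never responds burns through the whole retry budget (10 attempts here, 5 in the later containerized task) before the play fails.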
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-09 06:31:29.716510 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-09 06:31:29.716530 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-09 06:59:38.199838 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111'}])  2026-02-09 06:59:38.200857 | orchestrator | 2026-02-09 06:59:38.200897 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-09 06:59:38.200912 | orchestrator | Monday 09 February 2026 06:31:29 +0000 (0:00:09.854) 0:10:21.496 ******* 2026-02-09 06:59:38.200924 | orchestrator | changed: [testbed-node-0] 2026-02-09 06:59:38.200936 | orchestrator | 
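The "Set cluster configs" task above loops over each (section, option) pair and skips options whose value is Ansible's `__omit_place_holder__` sentinel, which is why `osd_crush_chooseleaf_type` is the only skipped item. A rough sketch of that filtering, rendered as the equivalent `ceph config set` commands; rendering to CLI commands is an illustration of the effect, not the mechanism ceph-ansible actually uses:

```python
def render_config_cmds(section: str, options: dict) -> list:
    """Render config options as `ceph config set` commands, skipping
    values carrying Ansible's omit sentinel (those options stay unset)."""
    cmds = []
    for key, value in options.items():
        if isinstance(value, str) and value.startswith("__omit_place_holder__"):
            continue  # omitted in the playbook, so never applied
        cmds.append(f"ceph config set {section} {key} {value}")
    return cmds

# The 'global' section exactly as it appears in the task items above.
global_opts = {
    "public_network": "192.168.16.0/20",
    "cluster_network": "192.168.16.0/20",
    "osd_pool_default_crush_rule": -1,
    "ms_bind_ipv6": "False",
    "ms_bind_ipv4": "True",
    "osd_crush_chooseleaf_type": "__omit_place_holder__402bc5dcba607d801c5b522727a20e9adb754111",
}
cmds = render_config_cmds("global", global_opts)
```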
2026-02-09 06:59:38.200947 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-09 06:59:38.200958 | orchestrator | Monday 09 February 2026 06:31:32 +0000 (0:00:02.517) 0:10:24.014 ******* 2026-02-09 06:59:38.200969 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-09 06:59:38.200980 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-09 06:59:38.200991 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-09 06:59:38.201002 | orchestrator | 2026-02-09 06:59:38.201013 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-09 06:59:38.201024 | orchestrator | Monday 09 February 2026 06:31:34 +0000 (0:00:02.179) 0:10:26.194 ******* 2026-02-09 06:59:38.201035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-09 06:59:38.201046 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-09 06:59:38.201056 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-09 06:59:38.201067 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:59:38.201078 | orchestrator | 2026-02-09 06:59:38.201090 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-09 06:59:38.201100 | orchestrator | Monday 09 February 2026 06:31:35 +0000 (0:00:01.405) 0:10:27.599 ******* 2026-02-09 06:59:38.201111 | orchestrator | skipping: [testbed-node-0] 2026-02-09 06:59:38.201122 | orchestrator | 2026-02-09 06:59:38.201132 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] *** 2026-02-09 06:59:38.201144 | orchestrator | Monday 09 February 2026 06:31:37 +0000 (0:00:01.201) 0:10:28.800 ******* 2026-02-09 06:59:38.201155 | orchestrator | 2026-02-09 06:59:38.201166 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] ***
2026-02-09 06:59:38.201285 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left).
2026-02-09 06:59:38.201573 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left).
2026-02-09 06:59:38.201868 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left).
2026-02-09 06:59:38.202158 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left).
2026-02-09 06:59:38.202493 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left).
2026-02-09 07:03:04.018185 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] ***
2026-02-09 07:03:04.018266 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.8", "quorum_status", "--format", "json"], "delta": "0:05:00.260300", "end": "2026-02-09 07:02:55.684874", "msg": "non-zero return code", "rc": 1, "start": "2026-02-09 06:57:55.424574", "stderr": "2026-02-09T07:02:55.665+0000 72f3fce1d640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-02-09T07:02:55.665+0000 72f3fce1d640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []}
2026-02-09 07:03:04.018282 | orchestrator | 
2026-02-09 07:03:04.018294 | orchestrator | TASK [Unmask the mon service] **************************************************
2026-02-09 07:03:04.018305 | orchestrator | Monday 09 February 2026 07:02:57 +0000 (0:31:20.232) 0:41:49.033 *******
2026-02-09 07:03:04.018317 | orchestrator | ok: [testbed-node-0]
2026-02-09 07:03:04.018328 | orchestrator | 
2026-02-09 07:03:04.018339 | orchestrator | TASK [Unmask the mgr service] **************************************************
2026-02-09 07:03:04.018350 | orchestrator | Monday 09 February 2026 07:02:59 +0000 
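The fatal result above is not a quorum-formation problem but a connectivity/authentication one: the client hunted for a monitor at 192.168.16.8, never authenticated within the 300 s timeout, and RADOS gave up with errno 110. A small triage helper keyed on the messages seen in this log (an assumption: these substrings are a heuristic, not a stable ceph API):

```python
def classify_quorum_failure(rc: int, stderr: str) -> str:
    """Rough triage of a failed `quorum_status` call, based on the
    error strings observed in this run (heuristic, not a ceph API)."""
    if "authenticate timed out" in stderr or "errno 110" in stderr:
        return "mon-unreachable"  # client never connected/authenticated
    if rc != 0:
        return "command-failed"
    return "ok"

# stderr exactly as reported by the failed task above.
log_stderr = (
    "2026-02-09T07:02:55.665+0000 72f3fce1d640 0 monclient(hunting): "
    "authenticate timed out after 300\n"
    "[errno 110] RADOS timed out (error connecting to the cluster)"
)
```

A "mon-unreachable" outcome points at the mon daemon/container on 192.168.16.8 (or the network path to it) rather than at quorum logic, so the mon's own log on that node is the more useful next stop than rerunning the quorum check.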
(0:00:01.886) 0:41:50.919 *******
2026-02-09 07:03:04.018361 | orchestrator | ok: [testbed-node-0]
2026-02-09 07:03:04.018372 | orchestrator | 
2026-02-09 07:03:04.018383 | orchestrator | TASK [Stop the playbook execution] *********************************************
2026-02-09 07:03:04.018394 | orchestrator | Monday 09 February 2026 07:03:00 +0000 (0:00:01.763) 0:41:52.682 *******
2026-02-09 07:03:04.018406 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. Please, check the previous task results."}
2026-02-09 07:03:04.018418 | orchestrator | 
2026-02-09 07:03:04.018429 | orchestrator | PLAY RECAP *********************************************************************
2026-02-09 07:03:04.018440 | orchestrator | localhost       : ok=0   changed=0  unreachable=0  failed=0  skipped=1    rescued=0  ignored=0
2026-02-09 07:03:04.018452 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0  failed=0  skipped=57   rescued=0  ignored=0
2026-02-09 07:03:04.018463 | orchestrator | testbed-node-0  : ok=121 changed=7  unreachable=0  failed=1  skipped=164  rescued=1  ignored=0
2026-02-09 07:03:04.018493 | orchestrator | testbed-node-1  : ok=25  changed=1  unreachable=0  failed=0  skipped=57   rescued=0  ignored=0
2026-02-09 07:03:04.018508 | orchestrator | testbed-node-2  : ok=25  changed=1  unreachable=0  failed=0  skipped=57   rescued=0  ignored=0
2026-02-09 07:03:04.018548 | orchestrator | testbed-node-3  : ok=33  changed=1  unreachable=0  failed=0  skipped=74   rescued=0  ignored=0
2026-02-09 07:03:04.018562 | orchestrator | testbed-node-4  : ok=33  changed=1  unreachable=0  failed=0  skipped=71   rescued=0  ignored=0
2026-02-09 07:03:04.018576 | orchestrator | testbed-node-5  : ok=33  changed=1  unreachable=0  failed=0  skipped=71   rescued=0  ignored=0
2026-02-09 07:03:04.018590 | orchestrator | 
2026-02-09 07:03:04.018666 | orchestrator | TASKS RECAP ********************************************************************
2026-02-09 07:03:04.018680 | orchestrator | Monday 09 February 2026 07:03:03 +0000 (0:00:02.615) 0:41:55.298 *******
2026-02-09 07:03:04.018694 | orchestrator | ===============================================================================
2026-02-09 07:03:04.018708 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 1880.23s
2026-02-09 07:03:04.018722 | orchestrator | Gather and delegate facts ---------------------------------------------- 32.99s
2026-02-09 07:03:04.018735 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.93s
2026-02-09 07:03:04.018749 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.39s
2026-02-09 07:03:04.018781 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.10s
2026-02-09 07:03:04.018795 | orchestrator | Set cluster configs ---------------------------------------------------- 10.06s
2026-02-09 07:03:04.018809 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 9.85s
2026-02-09 07:03:04.018823 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.77s
2026-02-09 07:03:04.018837 | orchestrator | Gather facts ------------------------------------------------------------ 6.05s
2026-02-09 07:03:04.018848 | orchestrator | Gather facts on all Ceph hosts for following reference ------------------ 4.91s
2026-02-09 07:03:04.018860 | orchestrator | Stop ceph mon ----------------------------------------------------------- 3.98s
2026-02-09 07:03:04.018871 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.96s
2026-02-09 07:03:04.018882 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.56s
2026-02-09 07:03:04.018892 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 3.52s
2026-02-09 07:03:04.018903 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 3.45s
2026-02-09 07:03:04.018914 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.44s
2026-02-09 07:03:04.018924 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 3.31s
2026-02-09 07:03:04.018935 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 3.26s
2026-02-09 07:03:04.018946 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.23s
2026-02-09 07:03:04.018957 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 3.12s
2026-02-09 07:03:04.598129 | orchestrator | ERROR
2026-02-09 07:03:04.598596 | orchestrator | {
2026-02-09 07:03:04.598734 | orchestrator |   "delta": "2:06:40.072522",
2026-02-09 07:03:04.598804 | orchestrator |   "end": "2026-02-09 07:03:04.252976",
2026-02-09 07:03:04.598917 | orchestrator |   "msg": "non-zero return code",
2026-02-09 07:03:04.598976 | orchestrator |   "rc": 2,
2026-02-09 07:03:04.599056 | orchestrator |   "start": "2026-02-09 04:56:24.180454"
2026-02-09 07:03:04.599111 | orchestrator | } failure
2026-02-09 07:03:04.844905 | 
2026-02-09 07:03:04.845065 | PLAY RECAP
2026-02-09 07:03:04.845151 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-09 07:03:04.845182 | 
2026-02-09 07:03:05.081287 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-09 07:03:05.082547 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-09 07:03:05.805671 | 
2026-02-09 07:03:05.805839 | PLAY [Post output play]
2026-02-09 07:03:05.823134 | 
2026-02-09 07:03:05.823282 | LOOP [stage-output : Register sources]
2026-02-09 07:03:05.885007 | 
2026-02-09 
07:03:05.885231 | TASK [stage-output : Check sudo] 2026-02-09 07:03:06.785136 | orchestrator | sudo: a password is required 2026-02-09 07:03:06.923582 | orchestrator | ok: Runtime: 0:00:00.016921 2026-02-09 07:03:06.938786 | 2026-02-09 07:03:06.938994 | LOOP [stage-output : Set source and destination for files and folders] 2026-02-09 07:03:06.978098 | 2026-02-09 07:03:06.978366 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-02-09 07:03:07.055987 | orchestrator | ok 2026-02-09 07:03:07.064405 | 2026-02-09 07:03:07.064539 | LOOP [stage-output : Ensure target folders exist] 2026-02-09 07:03:07.545744 | orchestrator | ok: "docs" 2026-02-09 07:03:07.546074 | 2026-02-09 07:03:07.796062 | orchestrator | ok: "artifacts" 2026-02-09 07:03:08.059722 | orchestrator | ok: "logs" 2026-02-09 07:03:08.081563 | 2026-02-09 07:03:08.081789 | LOOP [stage-output : Copy files and folders to staging folder] 2026-02-09 07:03:08.118406 | 2026-02-09 07:03:08.118718 | TASK [stage-output : Make all log files readable] 2026-02-09 07:03:08.416677 | orchestrator | ok 2026-02-09 07:03:08.426563 | 2026-02-09 07:03:08.426744 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-02-09 07:03:08.471495 | orchestrator | skipping: Conditional result was False 2026-02-09 07:03:08.488413 | 2026-02-09 07:03:08.488572 | TASK [stage-output : Discover log files for compression] 2026-02-09 07:03:08.513218 | orchestrator | skipping: Conditional result was False 2026-02-09 07:03:08.529537 | 2026-02-09 07:03:08.529750 | LOOP [stage-output : Archive everything from logs] 2026-02-09 07:03:08.576655 | 2026-02-09 07:03:08.576829 | PLAY [Post cleanup play] 2026-02-09 07:03:08.585842 | 2026-02-09 07:03:08.585948 | TASK [Set cloud fact (Zuul deployment)] 2026-02-09 07:03:08.649287 | orchestrator | ok 2026-02-09 07:03:08.659874 | 2026-02-09 07:03:08.659983 | TASK [Set cloud fact (local deployment)] 2026-02-09 07:03:08.694290 | orchestrator | skipping: Conditional result was 
False 2026-02-09 07:03:08.711473 | 2026-02-09 07:03:08.711675 | TASK [Clean the cloud environment] 2026-02-09 07:03:09.334262 | orchestrator | 2026-02-09 07:03:09 - clean up servers 2026-02-09 07:03:10.162064 | orchestrator | 2026-02-09 07:03:10 - testbed-manager 2026-02-09 07:03:10.256806 | orchestrator | 2026-02-09 07:03:10 - testbed-node-0 2026-02-09 07:03:10.351886 | orchestrator | 2026-02-09 07:03:10 - testbed-node-5 2026-02-09 07:03:10.433287 | orchestrator | 2026-02-09 07:03:10 - testbed-node-2 2026-02-09 07:03:10.537471 | orchestrator | 2026-02-09 07:03:10 - testbed-node-3 2026-02-09 07:03:10.622476 | orchestrator | 2026-02-09 07:03:10 - testbed-node-1 2026-02-09 07:03:10.710315 | orchestrator | 2026-02-09 07:03:10 - testbed-node-4 2026-02-09 07:03:10.811044 | orchestrator | 2026-02-09 07:03:10 - clean up keypairs 2026-02-09 07:03:10.829934 | orchestrator | 2026-02-09 07:03:10 - testbed 2026-02-09 07:03:10.853499 | orchestrator | 2026-02-09 07:03:10 - wait for servers to be gone 2026-02-09 07:03:23.990868 | orchestrator | 2026-02-09 07:03:23 - clean up ports 2026-02-09 07:03:24.194680 | orchestrator | 2026-02-09 07:03:24 - 0f7f80b7-4474-4ee3-aa02-396ee794cc9d 2026-02-09 07:03:24.462823 | orchestrator | 2026-02-09 07:03:24 - 3c5e5293-6ad9-4cf4-856b-cf840925d5c6 2026-02-09 07:03:24.779678 | orchestrator | 2026-02-09 07:03:24 - 4b7adcdf-254d-4fc6-bcb9-36dd452d606e 2026-02-09 07:03:25.000129 | orchestrator | 2026-02-09 07:03:24 - 727bf1d1-0a76-486a-9553-a4334783c16a 2026-02-09 07:03:25.224407 | orchestrator | 2026-02-09 07:03:25 - 7adecf51-ad28-4d61-acdf-e50765e97f99 2026-02-09 07:03:25.532946 | orchestrator | 2026-02-09 07:03:25 - 9e93e59b-ba0e-4319-a5f6-702cf88df7b4 2026-02-09 07:03:25.748342 | orchestrator | 2026-02-09 07:03:25 - b27aed70-03c2-4438-9820-b2e9af588aff 2026-02-09 07:03:26.559511 | orchestrator | 2026-02-09 07:03:26 - clean up volumes 2026-02-09 07:03:26.680711 | orchestrator | 2026-02-09 07:03:26 - testbed-volume-1-node-base 2026-02-09 
07:03:26.720177 | orchestrator | 2026-02-09 07:03:26 - testbed-volume-5-node-base 2026-02-09 07:03:26.762328 | orchestrator | 2026-02-09 07:03:26 - testbed-volume-4-node-base 2026-02-09 07:03:26.805361 | orchestrator | 2026-02-09 07:03:26 - testbed-volume-0-node-base 2026-02-09 07:03:26.849400 | orchestrator | 2026-02-09 07:03:26 - testbed-volume-2-node-base 2026-02-09 07:03:26.894003 | orchestrator | 2026-02-09 07:03:26 - testbed-volume-3-node-base 2026-02-09 07:03:26.935913 | orchestrator | 2026-02-09 07:03:26 - testbed-volume-6-node-3 2026-02-09 07:03:26.975323 | orchestrator | 2026-02-09 07:03:26 - testbed-volume-8-node-5 2026-02-09 07:03:27.022526 | orchestrator | 2026-02-09 07:03:27 - testbed-volume-4-node-4 2026-02-09 07:03:27.065965 | orchestrator | 2026-02-09 07:03:27 - testbed-volume-7-node-4 2026-02-09 07:03:27.109088 | orchestrator | 2026-02-09 07:03:27 - testbed-volume-0-node-3 2026-02-09 07:03:27.152731 | orchestrator | 2026-02-09 07:03:27 - testbed-volume-1-node-4 2026-02-09 07:03:27.193345 | orchestrator | 2026-02-09 07:03:27 - testbed-volume-2-node-5 2026-02-09 07:03:27.233677 | orchestrator | 2026-02-09 07:03:27 - testbed-volume-5-node-5 2026-02-09 07:03:27.280519 | orchestrator | 2026-02-09 07:03:27 - testbed-volume-manager-base 2026-02-09 07:03:27.322587 | orchestrator | 2026-02-09 07:03:27 - testbed-volume-3-node-3 2026-02-09 07:03:27.368849 | orchestrator | 2026-02-09 07:03:27 - disconnect routers 2026-02-09 07:03:27.506390 | orchestrator | 2026-02-09 07:03:27 - testbed 2026-02-09 07:03:28.530471 | orchestrator | 2026-02-09 07:03:28 - clean up subnets 2026-02-09 07:03:28.588924 | orchestrator | 2026-02-09 07:03:28 - subnet-testbed-management 2026-02-09 07:03:28.767716 | orchestrator | 2026-02-09 07:03:28 - clean up networks 2026-02-09 07:03:28.951245 | orchestrator | 2026-02-09 07:03:28 - net-testbed-management 2026-02-09 07:03:29.260489 | orchestrator | 2026-02-09 07:03:29 - clean up security groups 2026-02-09 07:03:29.308207 | orchestrator | 
2026-02-09 07:03:29 - testbed-node 2026-02-09 07:03:29.416584 | orchestrator | 2026-02-09 07:03:29 - testbed-management 2026-02-09 07:03:29.547661 | orchestrator | 2026-02-09 07:03:29 - clean up floating ips 2026-02-09 07:03:29.579052 | orchestrator | 2026-02-09 07:03:29 - 81.163.193.31 2026-02-09 07:03:29.966781 | orchestrator | 2026-02-09 07:03:29 - clean up routers 2026-02-09 07:03:30.071780 | orchestrator | 2026-02-09 07:03:30 - testbed 2026-02-09 07:03:31.273172 | orchestrator | ok: Runtime: 0:00:21.941094 2026-02-09 07:03:31.277698 | 2026-02-09 07:03:31.277866 | PLAY RECAP 2026-02-09 07:03:31.277982 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-02-09 07:03:31.278033 | 2026-02-09 07:03:31.409708 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-02-09 07:03:31.410718 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-09 07:03:32.133952 | 2026-02-09 07:03:32.134113 | PLAY [Cleanup play] 2026-02-09 07:03:32.149981 | 2026-02-09 07:03:32.150105 | TASK [Set cloud fact (Zuul deployment)] 2026-02-09 07:03:32.201486 | orchestrator | ok 2026-02-09 07:03:32.209212 | 2026-02-09 07:03:32.209346 | TASK [Set cloud fact (local deployment)] 2026-02-09 07:03:32.243228 | orchestrator | skipping: Conditional result was False 2026-02-09 07:03:32.261050 | 2026-02-09 07:03:32.261214 | TASK [Clean the cloud environment] 2026-02-09 07:03:33.421184 | orchestrator | 2026-02-09 07:03:33 - clean up servers 2026-02-09 07:03:33.888808 | orchestrator | 2026-02-09 07:03:33 - clean up keypairs 2026-02-09 07:03:33.902534 | orchestrator | 2026-02-09 07:03:33 - wait for servers to be gone 2026-02-09 07:03:33.942858 | orchestrator | 2026-02-09 07:03:33 - clean up ports 2026-02-09 07:03:34.026102 | orchestrator | 2026-02-09 07:03:34 - clean up volumes 2026-02-09 07:03:34.112267 | orchestrator | 2026-02-09 07:03:34 - disconnect routers 2026-02-09 07:03:34.137020 
| orchestrator | 2026-02-09 07:03:34 - clean up subnets 2026-02-09 07:03:34.161217 | orchestrator | 2026-02-09 07:03:34 - clean up networks 2026-02-09 07:03:34.317193 | orchestrator | 2026-02-09 07:03:34 - clean up security groups 2026-02-09 07:03:34.349481 | orchestrator | 2026-02-09 07:03:34 - clean up floating ips 2026-02-09 07:03:34.371803 | orchestrator | 2026-02-09 07:03:34 - clean up routers 2026-02-09 07:03:34.806829 | orchestrator | ok: Runtime: 0:00:01.378489 2026-02-09 07:03:34.810595 | 2026-02-09 07:03:34.810796 | PLAY RECAP 2026-02-09 07:03:34.810978 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-02-09 07:03:34.811051 | 2026-02-09 07:03:34.938680 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-09 07:03:34.941190 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-09 07:03:35.702301 | 2026-02-09 07:03:35.702463 | PLAY [Base post-fetch] 2026-02-09 07:03:35.718318 | 2026-02-09 07:03:35.718451 | TASK [fetch-output : Set log path for multiple nodes] 2026-02-09 07:03:35.773749 | orchestrator | skipping: Conditional result was False 2026-02-09 07:03:35.786920 | 2026-02-09 07:03:35.787112 | TASK [fetch-output : Set log path for single node] 2026-02-09 07:03:35.837224 | orchestrator | ok 2026-02-09 07:03:35.846358 | 2026-02-09 07:03:35.846492 | LOOP [fetch-output : Ensure local output dirs] 2026-02-09 07:03:36.332795 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/498f0a2532124dcf97529f6199660ac9/work/logs" 2026-02-09 07:03:36.597439 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/498f0a2532124dcf97529f6199660ac9/work/artifacts" 2026-02-09 07:03:36.857890 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/498f0a2532124dcf97529f6199660ac9/work/docs" 2026-02-09 07:03:36.872282 | 2026-02-09 07:03:36.872489 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-02-09 
07:03:37.800962 | orchestrator | changed: .d..t...... ./ 2026-02-09 07:03:37.801305 | orchestrator | changed: All items complete 2026-02-09 07:03:37.801363 | 2026-02-09 07:03:38.498538 | orchestrator | changed: .d..t...... ./ 2026-02-09 07:03:39.242889 | orchestrator | changed: .d..t...... ./ 2026-02-09 07:03:39.267473 | 2026-02-09 07:03:39.267601 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-02-09 07:03:39.301981 | orchestrator | skipping: Conditional result was False 2026-02-09 07:03:39.306224 | orchestrator | skipping: Conditional result was False 2026-02-09 07:03:39.326390 | 2026-02-09 07:03:39.326504 | PLAY RECAP 2026-02-09 07:03:39.326578 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-02-09 07:03:39.326620 | 2026-02-09 07:03:39.448133 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-09 07:03:39.450542 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-09 07:03:40.218750 | 2026-02-09 07:03:40.218953 | PLAY [Base post] 2026-02-09 07:03:40.233930 | 2026-02-09 07:03:40.234088 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-02-09 07:03:41.311241 | orchestrator | changed 2026-02-09 07:03:41.320415 | 2026-02-09 07:03:41.320545 | PLAY RECAP 2026-02-09 07:03:41.320617 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-02-09 07:03:41.320717 | 2026-02-09 07:03:41.442890 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-09 07:03:41.444193 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-02-09 07:03:42.237875 | 2026-02-09 07:03:42.238052 | PLAY [Base post-logs] 2026-02-09 07:03:42.248951 | 2026-02-09 07:03:42.249094 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-02-09 07:03:42.714385 | 
localhost | changed 2026-02-09 07:03:42.732400 | 2026-02-09 07:03:42.732587 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-02-09 07:03:42.770028 | localhost | ok 2026-02-09 07:03:42.775254 | 2026-02-09 07:03:42.775413 | TASK [Set zuul-log-path fact] 2026-02-09 07:03:42.793435 | localhost | ok 2026-02-09 07:03:42.806190 | 2026-02-09 07:03:42.806323 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-02-09 07:03:42.844163 | localhost | ok 2026-02-09 07:03:42.851022 | 2026-02-09 07:03:42.851221 | TASK [upload-logs : Create log directories] 2026-02-09 07:03:43.362312 | localhost | changed 2026-02-09 07:03:43.366912 | 2026-02-09 07:03:43.367061 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-02-09 07:03:43.857543 | localhost -> localhost | ok: Runtime: 0:00:00.006770 2026-02-09 07:03:43.861610 | 2026-02-09 07:03:43.861742 | TASK [upload-logs : Upload logs to log server] 2026-02-09 07:03:44.437059 | localhost | Output suppressed because no_log was given 2026-02-09 07:03:44.441115 | 2026-02-09 07:03:44.441301 | LOOP [upload-logs : Compress console log and json output] 2026-02-09 07:03:44.506384 | localhost | skipping: Conditional result was False 2026-02-09 07:03:44.511469 | localhost | skipping: Conditional result was False 2026-02-09 07:03:44.523784 | 2026-02-09 07:03:44.523979 | LOOP [upload-logs : Upload compressed console log and json output] 2026-02-09 07:03:44.570925 | localhost | skipping: Conditional result was False 2026-02-09 07:03:44.571838 | 2026-02-09 07:03:44.575188 | localhost | skipping: Conditional result was False 2026-02-09 07:03:44.588106 | 2026-02-09 07:03:44.588335 | LOOP [upload-logs : Upload console log and json output]